Creating a shader to render everything as black if not visible from a camera in the scene

Hi everyone,

I’m new to Babylon.js. I’m trying to implement the following: using a third-person camera, I want to visualize what a character can see, so I can’t simply render the scene from the third-person camera. I want every vertex in the scene that cannot be seen from the character’s point of view to result in a black fragment. I think it’s similar to how shadow mapping works; the only difference is that I’m not using a light (cone) but a camera (frustum), and everything that would be in shadow is simply black. For lack of a better visualization, here’s an image of a visibility polygon:

Imagine that, but in 3D, with everything not yellow rendered black.

Since I’m new to Babylon.js and graphics programming in general, I’m struggling to understand whether this is even possible the way I imagine it. After days of trial and error, here’s my current idea of how to do it:

  1. First Pass: Depth Texture Generation
  • Render the scene from the secondary camera’s perspective.
  • Write only depth values to a depth buffer or texture.
  • This pass does not require color information, only geometric depth.
  2. Second Pass: Visibility Determination
  • Render the scene from the main camera’s perspective.
  • For each fragment, transform its world position into the secondary camera’s clip space.
  • Compare the transformed fragment depth with the corresponding value in the depth texture:
    • If the fragment’s depth is less than or equal to the stored depth, it is visible from the secondary camera.
    • Otherwise, it is occluded.
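The comparison in the second pass can be sketched in plain JavaScript, independent of Babylon.js. Everything below is a hypothetical helper (`transformPoint`, `isVisible`, and `sampleDepth` are not Babylon APIs), assuming a row-major view-projection matrix and depths mapped to [0, 1]:

```javascript
// Multiply a column vector [x, y, z, 1] by a 4x4 matrix stored row-major.
function transformPoint(m, p) {
  const [x, y, z] = p;
  return [
    m[0] * x + m[1] * y + m[2] * z + m[3],
    m[4] * x + m[5] * y + m[6] * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
    m[12] * x + m[13] * y + m[14] * z + m[15],
  ];
}

// Returns true if the world-space point is inside the secondary camera's
// frustum AND not occluded according to the sampled depth-map value.
// sampleDepth(u, v) stands in for the depth-texture lookup.
function isVisible(viewProjection, worldPos, sampleDepth, bias = 0.001) {
  const [cx, cy, cz, cw] = transformPoint(viewProjection, worldPos);
  if (cw <= 0) return false;                          // behind the camera
  const ndc = [cx / cw, cy / cw, cz / cw];            // perspective divide
  if (ndc.some(c => c < -1 || c > 1)) return false;   // outside the frustum
  const u = ndc[0] * 0.5 + 0.5;                       // NDC -> texture UV
  const v = ndc[1] * 0.5 + 0.5;
  const fragDepth = ndc[2] * 0.5 + 0.5;               // NDC z -> [0, 1]
  return fragDepth <= sampleDepth(u, v) + bias;       // depth comparison
}
```

In a real shader this runs per fragment; the small bias guards against self-occlusion from depth-precision error, just like in shadow mapping.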

Besides all my struggles with rendering a depth texture into a render target texture and using it in a shader, I would love for someone to tell me whether what I’m trying to do even makes sense, and whether there might be a different approach that works.

Even though it doesn’t work here’s a playground with what I’ve tried to do so far: Babylon.js Playground

Thank you! :slight_smile:

Hello and welcome !

What you need is an occlusion map, centered on a point, which is basically the computation done by a shadow caster on a PointLight.

What you want is to compute something like this, right?

I would say either you use an actual light and shadow caster like I did (and render some stuff, use post-processing, etc. to reach your desired result), or you write a custom shader, in which case I guess you could take inspiration from the existing source code of this shadow generator :slight_smile:

Oh wow, this is very close to what I want. The only difference is that the point light is not directed like a spotlight or a camera frustum, and I wouldn’t want artifacts like these:

In the end I want multiple cameras like on a robot so it would look more like this:

Still, I have no idea how to go from this light-based version to something that renders what’s visible as “normal”.

Increase the texture size:

 var shadowGenerator = new BABYLON.ShadowGenerator(4096, light);

What you can do is create two cameras:

  • One for RGB render
  • One for Black & White mask

For each mesh you would create a defaultMaterial to be used for the RGB render, while the same maskMat material would be shared by all meshes in the scene, like so:

// Swap lights and materials depending on which camera is about to render
scene.onBeforeCameraRenderObservable.add((camera) => {
    // Lights
    rgbLight.setEnabled(camera === camera1);
    maskLight.forEach(light => light.setEnabled(camera === camera2));
    // Boxes
    boxes.forEach(box => box.material = camera === camera2 ? maskMat : box.defaultMaterial);
    // Ground
    ground.material = camera === camera2 ? maskMat : ground.defaultMaterial;
});
scene.activeCameras = [camera1, camera2];

Once that’s done, camera1 will render the RGB image, while camera2 will render your mask.

Finally, combine them with a custom post-processing pipeline:

BABYLON.Effect.ShadersStore["maskFragmentShader"] = `
    #ifdef GL_ES
        precision highp float;
    #endif

    varying vec2 vUV;
    uniform sampler2D textureSampler;
    uniform sampler2D rgbData;

    void main(void) 
    {
        vec4 mask = texture2D(textureSampler, vUV);
        vec4 baseColor = texture2D(rgbData, vUV);
        gl_FragColor = mask*baseColor;
    }
`;

var postProcess0 = new BABYLON.PassPostProcess("RGB", 1.0, camera1);
var postProcess1 = new BABYLON.PostProcess("Shadow", "mask", [], ["rgbData"], 1, camera2);
postProcess1.onApply = function (effect) {
    effect.setTextureFromPostProcess("rgbData", postProcess0);
};
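For reference, the multiply in the mask shader above is just this per-pixel math (a white mask keeps the pixel, a black mask zeroes it), sketched here as a plain function:

```javascript
// CPU-side analogue of `gl_FragColor = mask * baseColor` in the shader above.
// Both inputs are RGBA arrays with components in [0, 1].
function combine(mask, base) {
  return mask.map((m, i) => m * base[i]);
}
```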

Even though this isn’t going to produce the exact result I had in mind, it’s very interesting regardless. Thanks for sharing this approach. I’ll do a little more digging and maybe come back with some more specific questions.

Thank you for your help so far! :person_bowing:

I’ve made some progress and can now render only the fragments that are inside the secondary camera’s frustum. However, using the depth to determine whether a fragment is visible still doesn’t work; I just can’t figure it out. If I just render the value from the secondary camera’s depth map, it kind of looks like I’ve implemented shadow mapping.

It looks cool, but it’s not what I’m trying to achieve. I’ve commented out the part of the fragment shader that doesn’t work: Babylon.js Playground. The comparison of the depth values doesn’t seem to work.
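One frequent cause of a failing depth comparison (not confirmed to be the issue in this playground, just a common pitfall) is mixing depth encodings: the depth map may store linear view-space depth while the projected fragment depth is non-linear NDC depth, so the two values are never comparable. Both sides must use the same encoding. A sketch of converting non-linear NDC depth back to view-space distance, assuming a standard OpenGL-style perspective projection with near/far planes:

```javascript
// Convert non-linear NDC depth (in [-1, 1]) back to view-space distance,
// assuming a standard OpenGL-style perspective projection matrix.
function linearizeDepth(ndcZ, near, far) {
  return (2 * near * far) / (far + near - ndcZ * (far - near));
}
```

Once both depths are in the same space (both linearized, or both raw), the `<=` comparison plus a small bias behaves as expected.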

If anyone has any pointers I’d appreciate that.

It’s finally working. :smiley:

There are some artifacts when the angle between the direction to the camera and the surface normal is close to 90°, but I guess there isn’t much I can do about that.
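Those grazing-angle artifacts are the classic shadow-acne problem, and a slope-scaled bias usually tames them: grow the bias as the surface turns away from the camera. A minimal sketch (hypothetical helper, not a Babylon API), where `nDotC` is the dot product between the surface normal and the direction toward the secondary camera:

```javascript
// Slope-scaled depth bias: bias = baseBias * tan(acos(nDotC)),
// clamped so the bias never grows unbounded at grazing angles.
function slopeScaledBias(nDotC, baseBias = 0.0005, maxBias = 0.01) {
  const c = Math.min(Math.max(nDotC, 0.05), 1.0); // avoid division by ~0
  const tanTheta = Math.sqrt(1 - c * c) / c;      // tan(acos(c))
  return Math.min(baseBias * tanTheta, maxBias);
}
```

Babylon’s own ShadowGenerator exposes a similar idea through its bias settings; in a custom shader you would add this value to the depth-map sample before comparing.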

Full source code can be found here (implemented as shader material) and here (implemented as material plugin) in case anyone is interested.

Thanks again for your help! :smiley:
