I’m new to Babylon.js. I’m trying to implement the following: using a third-person camera, I want to visualize what a character can see, so I can’t simply render the scene from the third-person camera itself. I want every fragment in the scene that cannot be seen from the character’s point of view to be rendered black. I think it’s similar to how shadow mapping works. The only difference is that instead of a light (cone) I’m using a camera (frustum), and everything that would be in shadow is simply black. For lack of a better visualization, here’s an image of a visibility polygon:
Imagine that but in 3D and everything not yellow will be black.
Since I’m new to Babylon.js and graphics programming in general I’m struggling to understand if this is even possible in the way I imagine. After days of trial and error here’s my current idea of how to do it:
First Pass: Depth Texture Generation
Render the scene from the secondary camera’s perspective.
Write only depth values to a depth buffer or texture.
This pass does not require color information, only geometric depth.
Second Pass: Visibility Determination
Render the scene from the main camera’s perspective.
For each fragment, transform its world position into the secondary camera’s clip space.
Compare the transformed fragment depth with the corresponding value in the depth texture:
If the fragment’s depth is less than or equal to the stored depth, it is visible from the secondary camera.
Otherwise, it is occluded.
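The two passes above boil down to one comparison per fragment. Here’s a plain-JavaScript sketch of that comparison done on the CPU for a single pixel, just to sanity-check the math (all names and the simplified setup are my own assumptions: the secondary camera sits at the origin looking down −Z, so its view matrix is the identity and only a perspective projection is applied):

```javascript
// Standard perspective projection of a view-space point to NDC.
// Returns { x, y, depth } with depth remapped to [0, 1] like a depth buffer.
function projectToNdc(p, fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const clipX = (f / aspect) * p.x;
  const clipY = f * p.y;
  // OpenGL-style projection, then remap z from [-1, 1] to [0, 1].
  const clipZ = ((far + near) / (near - far)) * p.z + (2 * far * near) / (near - far);
  const clipW = -p.z; // camera looks down -Z, so w = -z
  return {
    x: clipX / clipW,
    y: clipY / clipW,
    depth: (clipZ / clipW) * 0.5 + 0.5,
  };
}

// Pass 1: an occluding wall fragment at z = -5, straight ahead.
// The "depth texture" stores the nearest depth seen at that pixel.
const wall = projectToNdc({ x: 0, y: 0, z: -5 }, Math.PI / 3, 1, 0.1, 100);
const storedDepth = wall.depth;

// Pass 2: a fragment behind the wall at z = -10, landing on the same pixel.
const hidden = projectToNdc({ x: 0, y: 0, z: -10 }, Math.PI / 3, 1, 0.1, 100);

const bias = 0.001; // small bias to avoid self-occlusion ("shadow acne")
const visible = hidden.depth <= storedDepth + bias;
console.log(visible); // false: the wall occludes it, so render black
```

The same comparison would run per fragment in the second-pass shader, with the view-projection matrix of the secondary camera replacing the simplified setup here.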
Setting aside all my struggles with getting a depth texture rendered into a render target texture and sampling it in a shader, I would love for someone to just tell me whether what I’m trying to do even makes sense, and whether there might be a different approach that works.
Even though it doesn’t work, here’s a playground with what I’ve tried so far: Babylon.js Playground
What you need is an occlusion map, centered on a point, which is basically the computation done by a shadow caster on a PointLight.
What you want is to compute something like that, right?
I would say either you use an actual light and shadow caster like I did (and render some stuff, use post-processing, etc. to reach your desired result), or you write a custom shader, in which case you could take inspiration from the existing source code of this shadow generator.
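The light-based variant would look something like this (a rough sketch, untested here; `character.position` and the mesh loop are assumptions about your scene):

```javascript
// A PointLight at the character's eye position plus a ShadowGenerator:
// everything the character cannot "see" falls into shadow.
const eye = new BABYLON.PointLight("characterEye", character.position, scene);
const shadowGen = new BABYLON.ShadowGenerator(1024, eye);

for (const mesh of scene.meshes) {
  shadowGen.addShadowCaster(mesh); // every mesh both casts…
  mesh.receiveShadows = true;      // …and receives shadows
}
```

From there you can push the contrast (darken shadowed areas, post-process, etc.) until shadowed reads as “not visible”.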
Oh wow, this is very close to what I want. The only difference is that the point light is not directed like a spot light or a camera frustum and I wouldn’t want artifacts like that:
For each mesh you would create a defaultMaterial to be used for the RGB render; otherwise you would use the same maskMat material for all meshes in the scene, like so:
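Something along these lines (a sketch, not tested here; `rtt` is assumed to be the RenderTargetTexture used for the mask pass):

```javascript
// One unlit black mask material shared by every mesh for the mask pass,
// while each mesh keeps (or gets) its own material for the normal RGB pass.
const maskMat = new BABYLON.StandardMaterial("maskMat", scene);
maskMat.disableLighting = true;
maskMat.emissiveColor = BABYLON.Color3.Black();

for (const mesh of scene.meshes) {
  // RGB pass: the mesh's regular material (create a default one if missing).
  if (!mesh.material) {
    mesh.material = new BABYLON.StandardMaterial(mesh.name + "Mat", scene);
  }
  // Mask pass: override the material only when rendering into the target.
  rtt.setMaterialForRendering(mesh, maskMat);
  rtt.renderList.push(mesh);
}
```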
Even though this isn’t going to produce the exact result I had in mind it’s very interesting regardless. Thanks for sharing this approach. I’ll do a little more digging and maybe come back with some more specific questions.
I’ve made some progress and can now render only the fragments that are inside the secondary camera’s frustum. However, using the depth to determine whether a fragment is visible still doesn’t work; I just can’t figure it out. If I just render the value from the secondary camera’s depth map, it kind of looks like I’ve implemented shadow mapping.
Looks cool, but it’s not what I’m trying to achieve. I’ve commented out the part of the fragment shader that doesn’t work: Babylon.js Playground. The comparison of the depth values doesn’t seem to work.
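One pitfall worth checking: if the depth map comes from `scene.enableDepthRenderer`, Babylon’s depth renderer stores a linearized depth by default (unless you pass `storeNonLinearDepth = true`), so comparing it against a non-linear NDC depth from the clip-space transform will never match — both sides of the comparison must be in the same space. For reference, here’s how the comparison could look for the NDC case (a sketch with assumed uniform/varying names, not your exact shader):

```glsl
// secondViewProjection: secondary camera's view-projection matrix
// secondDepthMap: depth texture rendered from the secondary camera
uniform mat4 secondViewProjection;
uniform sampler2D secondDepthMap;
varying vec3 vWorldPos;

void main(void) {
    vec4 clipPos = secondViewProjection * vec4(vWorldPos, 1.0);
    vec3 ndc = clipPos.xyz / clipPos.w;   // perspective divide
    vec2 uv = ndc.xy * 0.5 + 0.5;         // [-1,1] -> [0,1]
    // Depending on your setup you may need: uv.y = 1.0 - uv.y;
    float fragDepth = ndc.z * 0.5 + 0.5;  // depth in [0,1]

    float storedDepth = texture2D(secondDepthMap, uv).r;
    float bias = 0.001;                   // avoid self-occlusion
    bool inFrustum = clipPos.w > 0.0 &&
        all(lessThanEqual(abs(ndc), vec3(1.0)));
    bool visible = inFrustum && fragDepth <= storedDepth + bias;

    gl_FragColor = visible ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```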
There are some artifacts when the angle between the direction to the camera and the surface normal is close to 90°, but I guess there isn’t much I can do about that.
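Those grazing-angle artifacts are the classic shadow-acne problem, and the standard shadow-mapping mitigation is a slope-scaled bias: grow the bias as the surface turns edge-on to the secondary camera. A sketch (assuming `fragDepth` and `storedDepth` are the two depths being compared, `vNormal` is the surface normal, and `dirToSecondCam` points from the fragment toward the secondary camera; the constants are starting points to tune):

```glsl
float ndotl = clamp(dot(normalize(vNormal), dirToSecondCam), 0.0, 1.0);
// Small bias head-on, larger bias at grazing angles, capped to avoid
// "peter-panning" (visibility leaking past occluder edges).
float bias = clamp(0.0005 * tan(acos(ndotl)), 0.0, 0.01);
bool visible = fragDepth <= storedDepth + bias;
```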