Hello there
I have to say, I only started using Babylon.js yesterday. The community has already helped me a lot through all the forum posts, so first of all: THANK YOU.
At the moment I am trying to learn how to use the depth buffer and to understand the different coordinate spaces (world, view, screen…). I would like to visualize the depth of my objects. I have read various tutorials like this one… but I am having a hard time understanding all these concepts.
My final goal is to be able to detect where meshes intersect, for example where an ocean mesh meets a beach mesh, and render a different texture (foam) there. I'm nowhere near that yet… but that's the goal.
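Just so the goal is clearer, the depth-intersection trick I'm aiming for would look roughly like this in the fragment shader (a pure sketch of the idea, not working code; `vWaterDepth`, `uFoamWidth`, `uWaterColor` and `uFoamColor` are made-up names):

```glsl
// Sketch only: compare the depth of the scene behind the water (read from
// the depth map) with the water surface's own depth, and fade in foam
// where the difference is small (i.e. near the shoreline).
float sceneDepth = linearizeDepth(uDepthMap, vUv);           // what is behind the water
float foam = 1.0 - smoothstep(0.0, uFoamWidth, sceneDepth - vWaterDepth);
gl_FragColor = mix(uWaterColor, uFoamColor, foam);
```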
My issue is that I can’t visualize the depth. Debugging shaders is horribly hard and I guess I have misunderstood 200 things already. So let me show you what I have. This is the fragment shader:
varying vec2 vUv;
uniform sampler2D uDepthMap;

// Convert a non-linear depth buffer value back into a linear [0, 1] range
float linearizeDepth(sampler2D depthSampler, vec2 uv)
{
    float n = 1.0;     // camera z near
    float f = 10000.0; // camera z far
    float z = texture2D(depthSampler, uv).x;
    return (2.0 * n) / (f + n - z * (f - n));
}

void main(void)
{
    float value = linearizeDepth(uDepthMap, vUv);
    gl_FragColor = vec4(vec3(value), 1.0);
}
And the parameters are set up like this:
const depthTexture = this.renderer.getDepthMap();
waterMaterial.setTexture('uDepthMap', depthTexture);
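For completeness, `this.renderer` here wraps Babylon's DepthRenderer; a minimal sketch of the whole setup (the `'water'` shader path and the material options are just illustrative):

```javascript
// Assumed setup sketch: enable Babylon's built-in depth renderer and
// bind its output texture to the water material's sampler.
const depthRenderer = scene.enableDepthRenderer(camera);

const waterMaterial = new BABYLON.ShaderMaterial('waterMat', scene, 'water', {
    attributes: ['position', 'uv'],
    uniforms: ['worldViewProjection'],
    samplers: ['uDepthMap'],
});

waterMaterial.setTexture('uDepthMap', depthRenderer.getDepthMap());
```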
The vUv comes in from the vertex shader.
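The vertex shader is essentially the standard pass-through (roughly this, assuming the usual Babylon ShaderMaterial attributes):

```glsl
precision highp float;

attribute vec3 position;
attribute vec2 uv;
uniform mat4 worldViewProjection;
varying vec2 vUv;

void main(void)
{
    vUv = uv; // hand the mesh UVs to the fragment shader
    gl_Position = worldViewProjection * vec4(position, 1.0);
}
```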
So I'm pretty sure I have a problem understanding HOW this should work; I copy-pasted most of this together. My understanding is that the depth buffer contains, for each pixel, the depth of the closest surface relative to the camera. Then, with the corresponding vUv, we can sample the "depth map texture" and read the value, which is then used as the color of that pixel.
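To sanity-check the math, I ported linearizeDepth to plain JavaScript and fed it some sample raw depth values (n = 1 and f = 10000, as in the shader; this assumes the depth map stores the usual non-linear depth in [0, 1]):

```javascript
// JS port of the shader's linearizeDepth, to see what values it produces.
const n = 1.0;      // camera z near
const f = 10000.0;  // camera z far

function linearizeDepth(z) {
  return (2.0 * n) / (f + n - z * (f - n));
}

// With a non-linear depth buffer most fragments store z close to 1.0,
// and with f/n = 10000 almost everything linearizes to a value near 0:
console.log(linearizeDepth(0.5));   // ~0.0004 -> essentially black
console.log(linearizeDepth(0.999)); // ~0.167
console.log(linearizeDepth(1.0));   // exactly 1.0 at the far plane
```

So even if the sampling works, the output could simply be too dark to see anything.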
I really hope I did not talk too much nonsense.
Thanks for your time.