# DepthBuffer Visualization Explanation

Hello there!
I have to say I just started using Babylon.js yesterday. The community has already helped me a lot through all the forum posts. So, first of all, THANK YOU.

At the moment I am trying to learn how to use the depth buffer. To understand the concepts of the different viewpoints (global, screen, world…), I would like to visualize the depth of my objects. I have read various tutorials like this one… but I am having a hard time understanding all these different concepts.

My final goal is to be able to detect mesh intersections, for example between an ocean and a beach mesh, and render a different texture (foam) there. I'm not nearly there… but that's the goal.

My issue is that I can't visualize the depth. Debugging shaders is horribly hard, and I guess I have already misunderstood 200 things. So let me show you what I've got. This is the fragment shader:

```glsl
varying vec2 vUv;
uniform sampler2D uDepthMap;

float linearizeDepth(sampler2D depthSampler, vec2 uv)
{
    float n = 1.0; // camera z near
    float f = 10000.0; // camera z far
    float z = texture2D(depthSampler, uv).x;
    return (2.0 * n) / (f + n - z * (f - n));
}

void main(void)
{
    float value = linearizeDepth(uDepthMap, vUv);
    gl_FragColor = vec4(vec3(value), 1.0);
}
```
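As a quick numeric sanity check of that linearization formula (outside of any shader), here is the same math in plain JavaScript. The near/far values are the ones hard-coded above, and it assumes the sampled value is a non-linear (hyperbolic) depth in [0, 1]:

```javascript
// Same math as the linearizeDepth() shader function above, in plain JS.
// Assumes `z` is a non-linear depth value in [0, 1].
function linearizeDepth(z, near = 1.0, far = 10000.0) {
  return (2.0 * near) / (far + near - z * (far - near));
}

// At the far plane (z = 1) the result is exactly 1 ...
console.log(linearizeDepth(1.0)); // 1
// ... while at the near plane (z = 0) it is close to 0.
console.log(linearizeDepth(0.0)); // ~0.0002
```

This only produces sensible values if the texture really contains non-linear depth; feeding it an already-linear depth map gives nonsense, which turns out to matter later in this thread.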

And the parameters are set like this:

```javascript
const depthTexture = this.renderer.getDepthMap();
waterMaterial.setTexture('uDepthMap', depthTexture);
```

The `vUv` comes from the vertex shader.

So I'm pretty sure I have a misunderstanding of HOW this should work; I copy-pasted most of the stuff together. My understanding is that the depth buffer contains the depth of every point in the world relative to the camera. Then, with the corresponding `vUv`, we can access the "depth map texture" and read the value, which will then be used to define the color of this point.

I really hope I did not talk too much nonsense.

Thanks for your time.

You can have a look at this sample, which comes with the "Transparency and How Meshes Are Rendered" doc (Transparency and How Meshes Are Rendered - Babylon.js Documentation):

https://www.babylonjs-playground.com/#1PHYB0#81

By pressing F9 you will switch to a "render depth" mode.

Of note: `scene.enableDepthRenderer()` enables a depth renderer that writes linear depth into the texture. That's why the pixel shader used in this sample simply uses the value read from the texture and does not convert it before updating `gl_FragColor`.

If you want to see non-linear depth, use `scene.enableDepthRenderer(camera, true)`.

I think I got it… well, at least some parts. Many thanks for the link.
My brain is literally burning… I have read so many articles, and I'm pretty sure I don't get it 100% yet.
I created a small test on the Babylon playground. Cool web application, by the way!

https://playground.babylonjs.com/#KW6JCK#16

It does exactly what I wanted. But I'm not sure I fully understand it all correctly. Would it be possible for you to read the comments I added, to see if I understood how the positioning works?
I really want to understand how these different systems work (model, world, view).

Would there be an easier way to achieve this effect? I copied most of this together, and maybe I'm making everything too complicated. Some articles were from 2015… so maybe the tech has changed.

Many thanks

Guess I mixed some stuff up… I enabled `storeNonLinearDepth` by accident. With it on, it works… if it's off, it does not.

I have some visual issues you will notice when you zoom out. I'm still trying to figure out what's going on there.

It seems good to me and I don't see artifacts when zooming in/out.

The `alpha = 0.0` hack does work but, as you guessed, it is not what you want to do.

What you really want is for the water plane not to be rendered by the depth renderer into the offline depth buffer. `alpha = 0.0` works because when `alpha < 1`, the engine flags the material as needing alpha blending, and alpha-blended meshes don't write into the depth buffer. It's a convoluted way to achieve what you want, and having your water plane flagged as needing alpha blending may not be what you want in the end. Note that you could use any alpha value strictly `< 1` for the hack to work.

A cleaner way to do it is simply to pass the list of meshes you really want to be rendered by the depth renderer and exclude the water plane from this list:

```javascript
renderer.getDepthMap().renderList = [sphere, ground];
```

https://playground.babylonjs.com/#KW6JCK#17

```glsl
// world space
vPositionV = worldView * vec4(position, 1.0);
// view space
vClipSpace = gl_Position;
// world distance camera to water fragment
waterDistanceToCamera = vPositionV.z;
```

It should be more like:

```glsl
// view space
vPositionV = worldView * vec4(position, 1.0);
// clip space
vClipSpace = gl_Position;
// z-distance from camera to water fragment
waterDistanceToCamera = vPositionV.z;
```

There's a nice summary of the different spaces here: matrix - Screen space coordinates to Eye space conversion - Computer Graphics Stack Exchange

Note that I renamed "distance" to "z-distance", because a true distance should take into account the x and y components of `vPositionV` (and of the camera too, but in view space the camera is at (0,0,0) anyway). You could also call it "depth".
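To make the distance vs. z-distance point concrete, here is a small JavaScript illustration (the function names are just for this sketch): in view space the camera sits at the origin, the Euclidean distance uses all three components, and the z-distance only the last one:

```javascript
// View space: the camera sits at the origin, so both quantities are
// measured from (0, 0, 0) to the fragment's view-space position.
function euclideanDistance(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
}
function zDistance(p) {
  return Math.abs(p.z); // "depth": only the z component
}

const offAxis = { x: 3.0, y: 4.0, z: 12.0 };
console.log(euclideanDistance(offAxis)); // 13 (3-4-12 Pythagorean quadruple)
console.log(zDistance(offAxis));         // 12

// The two only coincide when the point lies on the camera axis.
const onAxis = { x: 0.0, y: 0.0, z: 12.0 };
console.log(euclideanDistance(onAxis) === zDistance(onAxis)); // true
```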

Note also that for a perspective projection, `gl_FragCoord.w = 1 / z_distance_to_camera_in_view_space`. So you could remove the `waterDistanceToCamera` variable and use `1.0 / gl_FragCoord.w` in the fragment shader instead, if you wanted to.
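A minimal sketch of why that works, using a hand-rolled WebGL-style right-handed perspective matrix (an assumption of this sketch; Babylon.js is left-handed, but the w/z relationship is the same up to sign): the clip-space w produced by a perspective projection is the view-space z-distance, so `1.0 / gl_FragCoord.w` recovers it:

```javascript
// Only the two bottom rows of the classic OpenGL perspective matrix
// matter for z and w, so just those are spelled out:
//   clip.z = ((far + near) / (near - far)) * z + (2 * far * near) / (near - far)
//   clip.w = -z
function perspectiveZW(near, far, viewZ) {
  const clipZ = ((far + near) / (near - far)) * viewZ + (2 * far * near) / (near - far);
  const clipW = -viewZ;
  return { clipZ, clipW };
}

// A point 5 units in front of the camera (view space looks down -z).
const { clipZ, clipW } = perspectiveZW(1.0, 100.0, -5.0);
console.log(clipW);       // 5 -> the view-space z-distance to the camera
console.log(1.0 / clipW); // 0.2 -> what gl_FragCoord.w would hold
```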

`vec2 ndc = (vClipSpace.xy / vClipSpace.w) / 2.0 + 0.5;`

I would call this `screenCoord` instead. The NDC coordinates would be just `vClipSpace.xy / vClipSpace.w`.

`// Q: this is normalized right? so the values can be between 0 and 1. 1-> at far clip, 0 at near clip?`

Indeed. In NDC space, the coordinates are between -1 and 1 for all 3 components (in D3D, Z is between 0 and 1), and in screen space they are between 0 and 1. For the Z coordinate, 0 = near plane and 1 = far plane with the standard depth buffer settings (with a reversed depth buffer, 1 = near and 0 = far).
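The same clip → NDC → screen-coordinate chain as in that shader line, written out in JavaScript (the function names and the sample values are illustrative):

```javascript
// Perspective divide: clip space -> NDC (components end up in [-1, 1]).
function toNdc(clip) {
  return { x: clip.x / clip.w, y: clip.y / clip.w };
}
// Remap NDC [-1, 1] -> screen/texture coordinates [0, 1],
// i.e. the `/ 2.0 + 0.5` from the shader line above.
function toScreenCoord(ndc) {
  return { x: ndc.x * 0.5 + 0.5, y: ndc.y * 0.5 + 0.5 };
}

const clip = { x: 2.0, y: -4.0, w: 4.0 };
const ndc = toNdc(clip);           // { x: 0.5, y: -1 }
const screen = toScreenCoord(ndc); // { x: 0.75, y: 0 }
console.log(ndc, screen);
```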

`// Question: What values do we expect for world distances? what unit?`

That's whatever unit you want. People like to use 1 unit = 1 meter, but it doesn't matter.

Regarding your calculation of the depth, I'm not sure it's the right one…

I have changed it to compute the linear z-distance of the water plane in clip space (but remapped from [-N, F] to [0, 1]) in the fragment shader, as the values you have in `depthSampler` (generated by the depth renderer) are also linear z-distances in clip space, remapped to [0, 1].

I have scaled the difference of depths by 150 before using it in the `mix`, so that we can see the pixels darkening as the difference of depths rises (i.e. when the water plane goes up), something that does not happen with your original computation, which makes me think it is not OK.

https://playground.babylonjs.com/#KW6JCK#18

Comment the `#define USE_ALTERNATE_DEPTH_COMPUTATION` line to use your original code.
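The scaling step described above can be sketched like this; the factor 150 comes from the post, while the variable names and sample depth values are illustrative assumptions:

```javascript
// Sketch of the depth-difference -> mix factor computation described above.
// `sceneDepth` and `waterDepth` are assumed to be linear depths in [0, 1]
// (as produced by the depth renderer); the names are illustrative.
function foamMixFactor(sceneDepth, waterDepth, scale = 150.0) {
  const diff = sceneDepth - waterDepth; // how far behind the water the scene is
  return Math.min(Math.max(diff * scale, 0.0), 1.0); // scale, then clamp to [0, 1]
}

// A tiny depth difference becomes a visible mix factor once scaled ...
console.log(foamMixFactor(0.505, 0.5)); // ~0.75
// ... and a large difference saturates at 1.
console.log(foamMixFactor(0.6, 0.5)); // 1
```

Without a scale factor of this kind, the raw difference of two depths in [0, 1] is so small that `mix` barely changes the output color, which is why the effect was invisible before.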

First of all, sorry for my late answer. You spent a lot of your time helping me; thank you a lot. With your help, I was able to achieve the expected style and behaviour.

I guess my biggest issue is still knowing what value/range to expect. It's not like you can debug and check the value of a variable as in "normal" programming.
