What is the depth value used for the depth test?

I tried to visualize the depth map at https://www.babylonjs-playground.com/#RL5CX0#4. It seems clear that the depth values are linear from 0 to 1; 0 represents the near plane and 1 represents the far plane.

In my app, there are two draw calls. In the first one, I write the depth value with gl_FragDepth = xxx;, and I want the second draw call to use the depth value written by the first for its depth test.

However, with gl_FragDepth = xxx;, the second scene only renders when xxx is larger than about 0.996, which is almost 1.0. If the value is smaller than 0.996, the second draw call is occluded.

It seems the depth value used for the depth test is not linear from zero to one. I have not turned on logarithmic depth, BTW.

So my question is: what is the actual value used for the depth test? And how can I convert a depth value in [near, far] into the appropriate [0, 1] value for the depth test?

Thanks!

Indeed, depth values are not linear. Depending on your use case, you’ll need to convert them to linear.
IIRC, the spaces are not the same between WebGPU and WebGL, so the conversion needs to be slightly different.
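
For instance, here is a minimal GLSL sketch for converting a non-linear depth-buffer value back to a camera-space distance. It assumes a standard perspective projection and no reverse depth buffer; near/far are the camera's minZ/maxZ, and the helper names are just mine:

// WebGL: the stored depth value maps to NDC z in [-1, 1]
float linearizeDepthWebGL(float d, float near, float far) {
	float ndcZ = 2.0 * d - 1.0;
	return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}

// WebGPU: NDC z already lies in [0, 1]
float linearizeDepthWebGPU(float d, float near, float far) {
	return (near * far) / (far - d * (far - near));
}

To see how non-linear this is: with a hypothetical near = 0.1 and far = 100, a point at camera-space z = 10 is stored as roughly 0.991, and z = 50 as roughly 0.999. Almost the whole visible range collapses into the last few thousandths of [0, 1], which matches the ~0.996 threshold you observed.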

It seems there are three distributions for the output of DepthRenderer:

  1. camera-space z, natively in [near, far]
  2. linear depth, in [0, 1], normalized by the vDepthMetric formula
  3. non-linear depth, in [0, 1], as produced by the projection matrix
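
For what it's worth, 1 and 2 are related by a plain linear remap; if I've done the algebra right, this sketch (hypothetical helper names; near/far are the camera's minZ/maxZ) converts between them:

// 1 -> 2: camera-space z to a [0, 1] linear depth
float cameraZToLinear01(float zCam, float near, float far) {
	return (zCam - near) / (far - near);
}

// 2 -> 1: a [0, 1] linear depth back to camera-space z
float linear01ToCameraZ(float d, float near, float far) {
	return near + d * (far - near);
}

Distribution 3 is the non-linear one; converting it back to camera space is what the linearize sketch above does. The relevant excerpts from the DepthRenderer shaders:
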
/// depth.vertex.fx

/// ...
	gl_Position = viewProjection * worldPos;

	#ifdef STORE_CAMERASPACE_Z
		// distribution 1: pass the raw camera-space position through a varying
		vViewPos = view * worldPos;
	#else
		#ifdef USE_REVERSE_DEPTHBUFFER
			vDepthMetric = ((-gl_Position.z + depthValues.x) / (depthValues.y));
		#else
			// distribution 2: clip-space z remapped linearly to [0, 1]
			vDepthMetric = ((gl_Position.z + depthValues.x) / (depthValues.y));
		#endif
	#endif
/// ...

/// depth.fragment.fx

/// ...

#ifdef STORE_CAMERASPACE_Z
	// distribution 1: camera-space z, in [near, far]
	#ifdef PACKED
		gl_FragColor = pack(vViewPos.z);
	#else
		gl_FragColor = vec4(vViewPos.z, 0.0, 0.0, 1.0);
	#endif
#else
	#ifdef NONLINEARDEPTH
		// distribution 3: gl_FragCoord.z, the value the depth test itself uses
		#ifdef PACKED
			gl_FragColor = pack(gl_FragCoord.z);
		#else
			gl_FragColor = vec4(gl_FragCoord.z, 0.0, 0.0, 0.0);
		#endif
	#else
		// distribution 2: the linear [0, 1] vDepthMetric
		#ifdef PACKED
			gl_FragColor = pack(vDepthMetric);
		#else
			gl_FragColor = vec4(vDepthMetric, 0.0, 0.0, 1.0);
		#endif
	#endif
#endif

/// ...
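
On the consumer side, here is a minimal sketch of a full-screen fragment shader that reads the result back (hypothetical sampler/varying names; IIRC the packingFunctions include that provides pack() above also defines the matching unpack()):

#include<packingFunctions>

uniform sampler2D depthSampler; // the DepthRenderer's render target
varying vec2 vUV;

void main(void) {
	#ifdef PACKED
		// RGBA8 target: undo the pack() from depth.fragment.fx
		float storedDepth = unpack(texture2D(depthSampler, vUV));
	#else
		// float target: the value sits directly in the red channel
		float storedDepth = texture2D(depthSampler, vUV).r;
	#endif

	// storedDepth is camera-space z, vDepthMetric, or gl_FragCoord.z,
	// depending on which of the three modes wrote it
	gl_FragColor = vec4(vec3(storedDepth), 1.0);
}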

I don’t quite get the meaning of vDepthMetric:

vDepthMetric = ((gl_Position.z + depthValues.x) / (depthValues.y));

Here, gl_Position.z should be in clip space, ranged [0, 1]; depthValues.x is minZ and depthValues.y is minZ + maxZ. Do you know what this formula means?

I don’t want to be wrong here. The best person to answer, for sure, is @Evgeni_Popov :slight_smile:

I think this doc page should help:

https://doc.babylonjs.com/features/featuresDeepDive/lights/mathShadows
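
In short, if I'm reading it right (assuming the standard WebGL convention, where NDC z spans [-1, 1] and the projection matrix writes the camera-space z into w): gl_Position.z in the vertex shader is clip-space z before the perspective divide, so it runs from -near at the near plane to +far at the far plane, not [0, 1]. With depthValues.x = near and depthValues.y = near + far, the formula is just a linear remap of that range:

// clip-space z for a standard perspective projection
z_clip = ((far + near) * z_eye - 2.0 * far * near) / (far - near)

// at the near plane (z_eye = near): z_clip = -near  ->  vDepthMetric = 0
// at the far plane  (z_eye = far):  z_clip =  far   ->  vDepthMetric = 1
vDepthMetric = (z_clip + near) / (near + far)
             = (z_eye - near) / (far - near)   // after simplifying

Because it is interpolated as an ordinary varying (perspective-correct), the per-fragment value stays linear in camera space. That is exactly distribution 2 in the list above.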

Thanks! This post is very helpful.