Clip space to world space with non-linear reverse depth buffer with WebGPU

Hello everyone!

I have a GLSL snippet that uses the default depth renderer (with non-linear depth enabled) to reconstruct the world-space position of each pixel on screen:

// Inverses of the camera matrices, provided as uniforms by the application
uniform mat4 inverseProjection;
uniform mat4 inverseView;

vec3 worldFromUV(vec2 pos, float depth) {
    vec4 ndc = vec4(
        pos.xy * 2.0 - 1.0, // map UV from [0, 1] to NDC [-1, 1]
        depth,
        1.0
    );
    vec4 posVS = inverseProjection * ndc; // NDC -> view space (w not yet divided)
    vec4 posWS = inverseView * posVS;     // view space -> world space
    return posWS.xyz / posWS.w;           // perspective divide
}
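The matrix math the snippet performs can be checked offline. Here is a minimal pure-Python sketch (assuming a left-handed projection with NDC z in [0, 1], the WebGPU convention, an identity view matrix, and hypothetical near/far values; the helpers are illustrative, not engine code):

```python
import math

def perspective(fov_y, aspect, near, far):
    # Left-handed perspective matrix mapping view-space z in [near, far]
    # to NDC z in [0, 1] (WebGPU-style depth range).
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, far / (far - near), -near * far / (far - near)],
        [0.0, 0.0, 1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def invert(m):
    # Gauss-Jordan elimination with partial pivoting on a 4x4 matrix.
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                k = a[r][col]
                a[r] = [x - k * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def world_from_uv(uv, depth, inv_projection, inv_view):
    # Same steps as the shader: UV -> NDC, undo projection, undo view,
    # then the perspective divide.
    ndc = [uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth, 1.0]
    pos_vs = mat_vec(inv_projection, ndc)
    pos_ws = mat_vec(inv_view, pos_vs)
    return [c / pos_ws[3] for c in pos_ws[:3]]

# Round trip: project a known point, then unproject it back.
proj = perspective(math.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
point = [1.0, 2.0, 5.0, 1.0]           # camera at origin: view == world
clip = mat_vec(proj, point)
ndc = [c / clip[3] for c in clip[:3]]  # perspective divide
uv = [(ndc[0] + 1.0) / 2.0, (ndc[1] + 1.0) / 2.0]
recovered = world_from_uv(uv, ndc[2], invert(proj), identity)
print(recovered)  # ~ [1.0, 2.0, 5.0]
```

The round trip recovering the original point shows why the shader can defer the divide by w until the end: the view matrix is affine, so it leaves w untouched.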

And here is a playground that uses it:

Now I want to use reverse depth. I found online that the projection matrix needs some changes, but I believe the engine already handles that.

The other change is that the near plane is now at z=1 and the far plane at z=0, so I changed the z component of ndc from depth to 1.0 - depth. The result is this:

The output of worldFromUV seems too small; I can get roughly the correct result by multiplying by 10 or so, but that is not a general solution.

Does anyone see what could be missing?

You don't have to do anything special to handle the reverse depth buffer:

In this PG, I simply display the output of worldFromUV. You will see the result is the same whether engine.useReverseDepthBuffer is true or false.

If you set a background plane (as I did in my PG), you get the same result as your PGs in both cases (using depth, not 1 - depth, in the reversed case), so your calculation must rely on the clear value used for the depth buffer (which is 0 in the reversed case and 1 in the normal case).
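To illustrate why the clear value matters, here is a small pure-Python sketch of just the z part of the projection (assuming WebGPU-style NDC z in [0, 1] and a common reverse-Z mapping; the formulas are illustrative, not Babylon.js internals). The reversed depth is fed to the inverse of the reversed mapping as-is, with no 1 - depth remap, and the clear value (1 normally, 0 reversed) unprojects to the far plane under both conventions:

```python
near, far = 0.1, 100.0  # hypothetical clip planes

def ndc_z(z):
    # Standard mapping: near -> 0, far -> 1.
    return far * (z - near) / (z * (far - near))

def ndc_z_reversed(z):
    # Reverse-Z mapping: near -> 1, far -> 0.
    return near * (far - z) / (z * (far - near))

def view_z(d):
    # Inverse of ndc_z.
    return near * far / (far - d * (far - near))

def view_z_reversed(d):
    # Inverse of ndc_z_reversed: the reversed depth is used directly.
    return near * far / (near + d * (far - near))

z = 5.0
print(view_z(ndc_z(z)))                    # ~5.0: standard round trip
print(view_z_reversed(ndc_z_reversed(z)))  # ~5.0: reversed round trip, no 1 - d
print(view_z(1.0))                         # ~100.0: standard clear value -> far plane
print(view_z_reversed(0.0))                # ~100.0: reversed clear value -> far plane
```

Note what happens if you clear the reversed depth buffer to 1 instead of 0: background pixels then unproject to the near plane, which throws the reconstruction off wherever nothing was rendered.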


You saved me again! I changed the clear color of the depth renderer and now it works exactly as expected:

Thanks a lot 🙂