Using a custom depth texture source with post processes, rather than the DepthRenderer

In my Babylon.js project, there’s a MultiRenderTargetTexture (basically a g-buffer, but with some more custom channels). The scene meshes get rendered into the MRT. Then the MRT gets composited into a floating-point buffer where lighting is applied. Post process effects from the DefaultRenderingPipeline do the final composition from the lighting buffer to the canvas.
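
For illustration, the setup is roughly along these lines (the names, size, and channel count here are just placeholders, not my actual code):

// Sketch of the kind of g-buffer MRT described above (hypothetical names/values).
const gBuffer = new BABYLON.MultiRenderTarget(
    "gBuffer",
    { width: engine.getRenderWidth(), height: engine.getRenderHeight() },
    3, // e.g. albedo, normals, plus a custom channel
    scene,
    { generateDepthTexture: true } // expose the depth attachment so later passes can sample it
);
gBuffer.renderList = scene.meshes;
scene.customRenderTargets.push(gBuffer);
// The depth attachment written during this pass is the buffer I would like the
// post processes to reuse instead of a separate DepthRenderer pass.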

One of the post processes I want to leverage is Depth of Field, which uses the Scene’s DepthRenderer to generate a depth texture… My understanding is that the DepthRenderer outputs linear distances from the camera (not z-depth values like those encoded in a depth buffer).
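
For context, the standard path I’m referring to is roughly this (just a sketch; variable names are illustrative):

// Enabling DoF on the DefaultRenderingPipeline relies on the scene's DepthRenderer,
// which re-renders the scene into its own (linear) depth map.
const pipeline = new BABYLON.DefaultRenderingPipeline("default", true, scene, [camera]);
pipeline.depthOfFieldEnabled = true;

// Equivalent depth map access if you enable the depth renderer yourself:
const depthMap = scene.enableDepthRenderer(camera).getDepthMap();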

However, because I already have a depth buffer from the MRT, I’d like to reuse that as the depth texture rather than relying on the DepthRenderer to generate a new one. The motivation here is purely performance related: having the DepthRenderer re-render the scene is expensive, and the cheaper alternative would be an additional full-screen pass to convert the depth buffer values from the MRT to the space that the DepthRenderer expects. But that is still another fullscreen pass, so also not ideal.

Re-using the depth buffer from the MRT seems like the most efficient approach. However, it’s not immediately possible, because the values stored in the MRT’s depth are the actual post-projection z values used for depth testing, not the easier-to-use linear depth values that the DepthRenderer outputs.
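
To make the difference concrete, here is the kind of conversion involved. This is only a sketch; the exact formula depends on the projection convention, and this assumes a standard (non-reversed) perspective projection whose depth buffer values lie in [0, 1]:

// Recover a view-space distance from a depth buffer value d, given near/far planes.
// Assumes a standard (non-reversed) projection writing depth in [0, 1].
function depthBufferValueToViewZ(d, near, far) {
    return (near * far) / (far - d * (far - near));
}
// Sanity check: d = 0 yields viewZ = near, d = 1 yields viewZ = far.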

To support using a custom depth texture in place of the DepthRenderer, I think a few things would need to be added:

  • A way to override the DepthRenderer or otherwise re-direct the scene to use a specified depth texture.
  • A common shader function used to transform from the z-depth of a depth texture to a linear distance (so it’s usable by post processes that sample from the texture).
  • And because there could be different projections used when rendering (e.g. reversed-z, infinite far planes, or a simple non-reversed near-far plane), different modes of depth reconstruction may need to be implemented.

Then, the post processes (or anything that needs to sample depth in a shader) can call into a common shader function whose implementation would be ifdef’d to correctly sample and transform the depth value.
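
As a rough illustration of what I mean (the include name, defines, and uniform below are all hypothetical, not an existing Babylon API, and the math assumes depth values in [0, 1] with the near/far planes passed in a uniform):

// Hypothetical shader include sketching the proposed common depth-sampling function.
BABYLON.Effect.IncludesShadersStore["sceneDepthHelper"] = `
uniform vec2 cameraMinMaxZ; // x = near plane, y = far plane

float sampleSceneViewZ(sampler2D depthSampler, vec2 uv) {
    float d = texture2D(depthSampler, uv).r;
#ifdef DEPTH_SOURCE_LINEAR
    // DepthRenderer-style source: already linear, normalized by near/far.
    return cameraMinMaxZ.x + d * (cameraMinMaxZ.y - cameraMinMaxZ.x);
#else
    #ifdef DEPTH_SOURCE_REVERSED_Z
        d = 1.0 - d; // fold reversed-z back into the standard convention
    #endif
    // Non-linear depth buffer value back to a view-space distance.
    return (cameraMinMaxZ.x * cameraMinMaxZ.y) /
           (cameraMinMaxZ.y - d * (cameraMinMaxZ.y - cameraMinMaxZ.x));
#endif
}
`;

Post processes that need depth would then include that function and be compiled with the define matching whichever depth source is bound.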

With all that in place, if one already has a depth buffer that could be bound as a texture, then that buffer could be reused by pipelines and post processes, avoiding the overhead of the DepthRenderer.

I wanted to throw this “feature request” out there to get some thoughts, or if there are other avenues to achieve this :slight_smile:

If you create a depth texture for the first post process (as described here, for example), you can reuse it in other post processes (or even in the first post process!) as an input texture.

Here’s a way to do it:

[EDIT] The depth is not linear, so you will have to convert it to linear if that is what you need.


Thanks for those pointers! Great explanation of the fluid rendering.

And the Playground is somewhat similar to the solution I have now, except the depth texture comes from a MultiRenderTargetTexture.

But the end goal I have is to use this depth texture as the input to Babylon’s DepthOfField post process effect, which, as you point out in the edit, would require linear depths.

So my solution now performs an additional fullscreen pass (and allocates an additional fullscreen RT) that converts the depth texture depths to linear depth. Then I feed the linear depth RT into the depth sampler of the DoF post process.

if (renderingPipeline.depthOfFieldEnabled) {
    // This render target holds the result of a fullscreen pass that converts a depth buffer texture to linear depths.
    const myCustomLinearDepthRenderTarget = getDistanceRenderTarget(scene);

    // Patch it into the Circle of Confusion depthSampler.
    const dofPostFx = renderingPipeline.depthOfField.getPostProcesses();
    const circleOfConfusionPostProcess = dofPostFx[0];
    lastPostProcessApplyObservers.push(
        circleOfConfusionPostProcess.onApplyObservable.add((effect) => {
            effect.setTexture("depthSampler", myCustomLinearDepthRenderTarget);
            // depthSampler already holds normalized linear values, so there is no need to rescale by the near/far planes.
            effect.setVector2("cameraMinMaxZ", new Vector2(0, 1));
        })
    );
}

This works, but perf could be better. It would be nice to avoid the extra fullscreen RT and the pass needed to convert from depth buffer depths to linear depth, and instead sample directly from the depth texture (performing the proper conversion from depth to linear depth as part of the DoF post process’s circle of confusion shader).

I think that idea of sampling directly from the depth texture and converting to linear depths on the fly is a concept that could be reused in other post FX that need depth (SSAO, SSR, and probably others).


Indeed, PRs are always welcome :slight_smile:

Alternatively, you can also create an issue in Issues · BabylonJS/Babylon.js · GitHub and we will see what we can do about it.


@BrianK I wonder if something like the material plugins, but for post process shaders, would work here?


I’ll chat offline with @Evgeni_Popov to see how we could potentially create this :slight_smile: Sorry for the delay.


After deliberation :slight_smile: we won’t do a full plugin system for post processes, but we will allow manual, targeted injection of code in post processes. This will also be available through the default rendering pipeline, with entry points to replace the depth functions.

The issue can be tracked here: Allow customization of the depth sampling function in post processes. · Issue #13243 · BabylonJS/Babylon.js · GitHub