In my Babylon.js project, there’s a MultiRenderTarget (basically a g-buffer, but with some additional custom channels). The scene meshes are rendered into the MRT, which is then composited into a floating-point buffer where lighting is applied. Post-process effects from the DefaultRenderingPipeline do the final composition from the lighting buffer to the canvas.
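For context, here’s a minimal sketch of the setup (the channel count and names are placeholders for my actual custom channels, and I’m assuming the `generateDepthTexture` option / `depthTexture` getter, which is how the depth ends up available for reuse):

```ts
// Minimal sketch of the g-buffer setup; the real MRT has more custom channels.
const mrt = new BABYLON.MultiRenderTarget(
    "gBuffer",
    { width: engine.getRenderWidth(), height: engine.getRenderHeight() },
    3, // e.g. albedo, normals, custom data
    scene,
    { generateDepthTexture: true } // exposes the depth buffer I'd like to reuse
);
scene.customRenderTargets.push(mrt);
```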
One of the post processes I want to leverage is Depth of Field, which uses the Scene’s DepthRenderer to generate a depth texture. My understanding is that the DepthRenderer outputs linear distances from the camera, not the z-depth values encoded in a depth buffer.
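For reference, this is how I’m enabling it today; flipping `depthOfFieldEnabled` on is what triggers the DepthRenderer under the hood:

```ts
const pipeline = new BABYLON.DefaultRenderingPipeline("default", true, scene, [camera]);
pipeline.depthOfFieldEnabled = true;        // causes the scene's DepthRenderer to run
pipeline.depthOfField.focusDistance = 2000; // values here are just examples
pipeline.depthOfField.focalLength = 50;
pipeline.depthOfField.fStop = 1.4;
```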
However, because I already have a depth buffer from the MRT, I’d like to reuse that as the depth texture rather than relying on the DepthRenderer to generate a new one. The motivation here is purely performance related: having the DepthRenderer re-render the scene is expensive. The cheaper alternative would be an additional full-screen pass to convert the depth buffer values from the MRT into the space that the DepthRenderer expects, but that is still another fullscreen pass, so also not ideal.
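To illustrate, that conversion pass would look something like this (the `linearizeDepth` fragment shader is hypothetical and not shown; it would just re-encode the depth):

```ts
// The extra pass I'd rather avoid: reads the MRT depth, writes linear distances.
const linearize = new BABYLON.PostProcess(
    "linearizeDepth",
    "linearizeDepth",          // hypothetical fragment shader in Effect.ShadersStore
    ["nearPlane", "farPlane"], // uniforms for the reconstruction
    ["mrtDepthSampler"],       // sampler bound to the MRT's depth texture
    1.0,
    camera
);
linearize.onApply = (effect) => {
    effect.setTexture("mrtDepthSampler", mrt.depthTexture);
    effect.setFloat("nearPlane", camera.minZ);
    effect.setFloat("farPlane", camera.maxZ);
};
```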
Re-using the depth buffer from the MRT seems like the most efficient approach. However, it isn’t immediately possible, because the values stored in the MRT’s depth are the actual post-projection z values used for depth testing, not the easier-to-use linear depth values the DepthRenderer outputs.
To support using a custom depth texture in place of the DepthRenderer, I think a few things would need to be added:
- A way to override the DepthRenderer or otherwise redirect the scene to use a specified depth texture (see the hypothetical API sketch after this list).
- A common shader function to transform the z-depth from a depth texture into a linear distance, so it’s usable by post processes that sample from the texture (a sketch of this follows further below).
- And because different projections can be used when rendering (e.g. reversed-z, infinite far planes, or a simple non-reversed near/far projection), different modes of depth reconstruction may need to be implemented.
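On the first point, the API could be as simple as something like this (purely hypothetical names, nothing like this exists today):

```ts
// Hypothetical API, for illustration only: point the scene at an existing
// depth texture instead of spinning up a DepthRenderer. "customDepthTexture"
// and "customDepthTextureEncoding" are invented names.
scene.customDepthTexture = mrt.depthTexture;
scene.customDepthTextureEncoding = "reverse-z"; // would drive the shader #defines
```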
Then the post processes (or anything that needs to sample depth in a shader) can call into a common shader function whose implementation is ifdef’d to correctly sample and transform the depth value.
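As a rough sketch of what that common function might look like, registered as a shader include (the name `sceneDepthFunctions` and the defines are invented, and the formulas assume a [0,1] depth range, so treat the exact math as illustrative):

```ts
// Illustrative include that post processes could share. Each #ifdef branch
// handles one of the projection modes listed above.
BABYLON.Effect.IncludesShadersStore["sceneDepthFunctions"] = `
uniform float nearPlane;
uniform float farPlane;

float sampleLinearSceneDepth(sampler2D depthSampler, vec2 uv) {
    float d = texture2D(depthSampler, uv).r;
#ifdef SCENE_DEPTH_LINEAR
    // DepthRenderer-style: already a linear distance, use as-is.
    return d;
#elif defined(SCENE_DEPTH_REVERSE_Z_INFINITE)
    // Reversed-z with an infinite far plane: d = near / z.
    return nearPlane / d;
#elif defined(SCENE_DEPTH_REVERSE_Z)
    // Reversed-z, [0,1] range (near maps to 1, far maps to 0).
    return nearPlane * farPlane / (nearPlane + d * (farPlane - nearPlane));
#else
    // Simple non-reversed near/far projection, [0,1] range.
    return nearPlane * farPlane / (farPlane - d * (farPlane - nearPlane));
#endif
}
`;
```

A post process’s fragment shader could then pull this in with `#include<sceneDepthFunctions>`, with the active mode selected through the effect’s defines.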
With all that in place, if one already has a depth buffer that can be bound as a texture, then that buffer could be reused for pipelines and post processes, avoiding the overhead of the DepthRenderer.
I wanted to throw this “feature request” out there to get some thoughts, or to hear if there are other avenues to achieve this.