Sharing a depth buffer between two cameras

Here’s how to make it work with the current Babylon.js codebase (note that MSAA is not fully supported in this setup):

You need to create the depth/stencil texture for the Edge post process so that the depth can be shared with the 2nd camera:

    postProcess.onSizeChangedObservable.add(() => {
        if (!postProcess.inputTexture.depthStencilTexture) {
            // Create a depth/stencil texture on the post process input
            // (comparison function 0, bilinear filtering, no stencil, 4 samples)
            postProcess.inputTexture.createDepthStencilTexture(0, true, false, 4);
            // Share that depth buffer with the 1st camera's render target
            postProcess.inputTexture._shareDepth(rtt.renderTarget);
        }
    });
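For reference, `postProcess` above is whatever post process runs on the 1st camera. The Edge shader itself isn’t shown in this thread, so as a stand-in, any built-in post process attached to that camera works the same way; `BlackAndWhitePostProcess` here is just an illustrative placeholder:

    // Placeholder for the Edge post process: any post process attached to the
    // 1st camera exposes the same inputTexture / onSizeChangedObservable API
    const postProcess = new BABYLON.BlackAndWhitePostProcess("edgeStandIn", 1.0, camera);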

The `rtt` we share the depth with is created beforehand and is used as the output render target for the 1st camera:

    // Render target sized to the current output resolution
    const rtt = new BABYLON.RenderTargetTexture("render target",
        { width: engine.getRenderWidth(), height: engine.getRenderHeight() }, scene);
    camera.outputRenderTarget = rtt;

This way, the 1st camera will render into a texture instead of the default framebuffer.
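The 2nd camera (`noPostProcessCamera` below) is not created in the snippets above. Assuming a standard two-camera setup, it just needs to be registered after the 1st camera in `scene.activeCameras` so it renders on top; the camera type and position here are illustrative:

    // Illustrative 2nd camera; what matters is the rendering order in activeCameras
    const noPostProcessCamera = new BABYLON.FreeCamera("camera2",
        new BABYLON.Vector3(0, 5, -10), scene);
    scene.activeCameras = [camera, noPostProcessCamera];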

This texture is then used as input for the 2nd camera thanks to a pass-through post process:

    // Pass-through post process on the 2nd camera: it simply copies rtt to the output
    const pp = new BABYLON.PassPostProcess("pass", 1, noPostProcessCamera);
    pp.inputTexture = rtt.renderTarget;
    // Don't clear before rendering, so the 1st camera's output is preserved
    pp.autoClear = false;

And that’s it! Because of `pp.autoClear = false`, the output of the 1st camera is not cleared, so the 2nd camera simply renders over what already exists in the texture. And because of the `_shareDepth(rtt.renderTarget)` call above, the 2nd camera also takes the depth buffer written by the 1st camera into account.
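For convenience, here is a minimal end-to-end sketch assembling the snippets above. The engine/scene boilerplate, the camera definitions, and the `BlackAndWhitePostProcess` stand-in for the Edge effect are illustrative assumptions, not part of the original setup:

    const engine = new BABYLON.Engine(document.getElementById("renderCanvas"), true);
    const scene = new BABYLON.Scene(engine);

    // 1st camera renders into rtt, 2nd camera composites on top of it
    const camera = new BABYLON.ArcRotateCamera("camera1", -Math.PI / 2, Math.PI / 3, 10,
        BABYLON.Vector3.Zero(), scene);
    const noPostProcessCamera = new BABYLON.FreeCamera("camera2",
        new BABYLON.Vector3(0, 5, -10), scene);
    scene.activeCameras = [camera, noPostProcessCamera];

    // Shared render target: output of the 1st camera, input of the 2nd
    const rtt = new BABYLON.RenderTargetTexture("render target",
        { width: engine.getRenderWidth(), height: engine.getRenderHeight() }, scene);
    camera.outputRenderTarget = rtt;

    // Stand-in for the Edge post process on the 1st camera
    const postProcess = new BABYLON.BlackAndWhitePostProcess("edgeStandIn", 1.0, camera);
    postProcess.onSizeChangedObservable.add(() => {
        if (!postProcess.inputTexture.depthStencilTexture) {
            postProcess.inputTexture.createDepthStencilTexture(0, true, false, 4);
            postProcess.inputTexture._shareDepth(rtt.renderTarget);
        }
    });

    // Pass-through on the 2nd camera, rendering over the 1st camera's output
    const pp = new BABYLON.PassPostProcess("pass", 1, noPostProcessCamera);
    pp.inputTexture = rtt.renderTarget;
    pp.autoClear = false;

    engine.runRenderLoop(() => scene.render());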
