Sharing depth buffer between two cameras

Hi,

I’m having trouble playing with multiple cameras…

I want my scene to have an outline postprocess, but I also need to exclude a few meshes from this postprocess.

My idea is to use two parented cameras: one with the post process attached, and the other, without any post process, rendering layer mask 0x10000000.
Then I give this 0x10000000 layer mask to the few meshes I want rendered without the post process.
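
Something like this (a sketch; names and camera types are just illustrative):

    // Main camera: renders everything except the excluded meshes, and has the
    // outline post process attached. 0x0FFFFFFF is the default layer mask.
    const mainCamera = new BABYLON.ArcRotateCamera("main", 0, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
    mainCamera.layerMask = 0x0FFFFFFF;

    // Second camera: parented to the first so it follows it, no post process,
    // and it only sees meshes carrying the 0x10000000 layer mask.
    const noPostProcessCamera = new BABYLON.FreeCamera("noPP", BABYLON.Vector3.Zero(), scene);
    noPostProcessCamera.parent = mainCamera;
    noPostProcessCamera.layerMask = 0x10000000;

    // Both cameras are active; they render in array order.
    scene.activeCameras = [mainCamera, noPostProcessCamera];

    // Excluded meshes get the second camera's layer mask only.
    excludedMesh.layerMask = 0x10000000;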

It works; the only issue is that my cameras do not share their depth buffers, so the second camera is always rendered on top. Can I have them share one?

(the blue cube is rendered with no post process, surrounded by red balls)

Maybe there’s another way to do it? I need to stick with an outline coming from a post process (instead of a geometric outline via mesh properties) because most of the outline contrast will come from lighting or textures.

Thanks a lot for your input! Have a nice day 🙂

Hello good sir.

I made some attempts to do it with one camera here. Gotta sleep for now though. I’m applying the post process to the camera instead of the engine, because otherwise I was getting errors.
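
In code, that means roughly the following (a sketch; the "outline" shader name is hypothetical):

    // Passing the camera as the 6th constructor argument attaches the post
    // process to it; passing null there requires supplying the engine instead.
    const postProcess = new BABYLON.PostProcess(
        "outline",  // display name
        "outline",  // fragment shader name (hypothetical)
        null,       // uniforms
        null,       // samplers
        1.0,        // size ratio
        camera      // attach to the camera rather than to the engine
    );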

Resources on alternatives:

Docs on render targets and render group ids:
- Render Target Texture With Multiple Passes | Babylon.js Documentation
- Render pass ids | Babylon.js Documentation

Here is a similar PG, but it’s selectively adding to the render list instead of selectively removing. Also, screen resizing is broken, but that’s no big deal.
- https://playground.babylonjs.com/#7HSBLA#6

I feel like just setting the sphere’s diffuseTexture to the RTT, without using a post process, could work too? It might also be more flexible if you need to filter multiple variants of a mesh. Just a feeling, though.
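
Just to illustrate that idea (all names made up, and no post process involved):

    // Render everything except the sphere into an RTT...
    const rtt = new BABYLON.RenderTargetTexture("view", { ratio: 1 }, scene);
    rtt.renderList = scene.meshes.filter((m) => m !== sphere);
    scene.customRenderTargets.push(rtt);

    // ...and sample that RTT from the sphere's own material.
    const mat = new BABYLON.StandardMaterial("rttMat", scene);
    mat.diffuseTexture = rtt;
    sphere.material = mat;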

There’s also the recently added material plugin system, which would allow you to inject your shader code into something that already supports selective rendering.
- Material Plugins | Babylon.js Documentation
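
For instance, the documentation’s minimal plugin looks roughly like this; fragment code is injected at predefined points such as CUSTOM_FRAGMENT_MAIN_END:

    class BlackAndWhitePluginMaterial extends BABYLON.MaterialPluginBase {
        constructor(material) {
            // plugin name "BlackAndWhite", priority 200, defines it may toggle
            super(material, "BlackAndWhite", 200, { BLACKANDWHITE: false });
            this._enable(true); // enable the plugin right away
        }

        getClassName() {
            return "BlackAndWhitePluginMaterial";
        }

        // Inject custom shader code at one of the predefined injection points
        getCustomCode(shaderType) {
            if (shaderType === "fragment") {
                return {
                    CUSTOM_FRAGMENT_MAIN_END: `
                        float luma = gl_FragColor.r * 0.299 + gl_FragColor.g * 0.587 + gl_FragColor.b * 0.114;
                        gl_FragColor = vec4(luma, luma, luma, 1.0);
                    `,
                };
            }
            return null;
        }
    }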

Anyway, I’ll check back tomorrow.


Thanks Jeremy. I’m afraid playground 7HSBLA has the same “issue” I do, with outlines being drawn regardless of depth ^^’

I’ve made an attempt by rendering the noPostProcess camera to another target and feeding the noOutlinePostProcess camera’s depth map into my post process, but without success so far: https://playground.babylonjs.com/#PS6MUT#16
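
The gist of that attempt, for reference (a sketch; variable names are mine):

    // Render the no-post-process camera into its own target texture instead
    // of the default framebuffer, so its depth map can be sampled separately.
    const secondRtt = new BABYLON.RenderTargetTexture("noPP", { ratio: 1 }, scene);
    noPostProcessCamera.outputRenderTarget = secondRtt;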

I’ll retry later outside the Playground so I can draw the depth textures into canvases for debugging. Thanks again and have a nice day!

Regarding this one, I have a fix, but it does not work with MSAA, so I’m still having a look.

But in any case, I think we should make some changes in the engine so that the user can take control of clearing the framebuffer for a camera. That way, this would be an easy fix (mine is not so easy).


Thanks anyway 🙂

I could make it work if I had a depth map for each camera, but I’m stuck because I can’t find a way to get one for the second camera (the one with the 0x20000000 layer mask).

I would expect to see the depth map of the cube here, in the texture on the right, but I don’t. Did I miss something obvious, or is this an internal limitation of BABYLON.Scene.enableDepthRenderer?

Edit: got it, it needs useOnlyInActiveCamera: https://playground.babylonjs.com/#PS6MUT#23
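
In code, roughly (a sketch; variable names are mine):

    // Request a depth renderer for the second camera only, and make it render
    // its depth map only while that camera is the active one.
    const depthRenderer = scene.enableDepthRenderer(noPostProcessCamera);
    depthRenderer.useOnlyInActiveCamera = true;
    const depthMap = depthRenderer.getDepthMap(); // per-camera depth texture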


And it works! With useOnlyInActiveCamera I can have a separate depth map for each active camera, with the 0x20000000 camera rendering to a render target texture.

Both depth maps and the 0x20000000 camera’s render target texture are then fed to the post process to do the depth test.
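
For reference, a sketch of that binding step (sampler and variable names are hypothetical and must match the custom fragment shader):

    outlinePostProcess.onApply = (effect) => {
        effect.setTexture("mainDepthSampler", mainDepthRenderer.getDepthMap());
        effect.setTexture("secondDepthSampler", secondDepthRenderer.getDepthMap());
        effect.setTexture("secondColorSampler", secondCameraRtt); // 2nd camera's color output
    };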

Thanks and have a good evening 🙂

Here’s how to make it work with the current Babylon.js codebase (MSAA not fully supported):

You need to create the depth/stencil texture for the Edge post process, so that the depth can be shared with the 2nd camera:

    postProcess.onSizeChangedObservable.add(() => {
        if (!postProcess.inputTexture.depthStencilTexture) {
            // Create a depth/stencil texture for the post process input (last argument: 4 samples)
            postProcess.inputTexture.createDepthStencilTexture(0, true, false, 4);
            // Share this depth buffer with the rtt the 1st camera renders into
            postProcess.inputTexture._shareDepth(rtt.renderTarget);
        }
    });

The rtt we share the depth with is created beforehand and is used as the output for the camera:

    const rtt = new BABYLON.RenderTargetTexture('render target', { width: engine.getRenderWidth(), height: engine.getRenderHeight() }, scene);
    camera.outputRenderTarget = rtt;

This way, the 1st camera will render into a texture instead of the default framebuffer.

This texture is then used as input for the 2nd camera thanks to a pass-through post process:

    const pp = new BABYLON.PassPostProcess("pass", 1, noPostProcessCamera);
    pp.inputTexture = rtt.renderTarget; // start from the 1st camera's output
    pp.autoClear = false;               // don't clear it before the 2nd camera renders

And that’s it! Thanks to pp.autoClear = false the rendering from the 1st camera won’t be cleared and the 2nd camera will just render over what already exists in the texture. And thanks to the _shareDepth(rtt.renderTarget) above, the 2nd camera will take the depth buffer coming from the 1st camera into account.


Congrats!

So you can choose between your solution and mine. Note that with mine you don’t have to create depth renderers and perform the check “by hand” in the shader; you can reuse the depth buffer from the 1st camera with the 2nd one.

OK, both solutions work, but I went with yours; it’s way less intrusive and worked with only a few lines in my camera creation code. Thanks a lot, you rock!


Looking good. Even though it’s unused now, good find with useOnlyInActiveCamera. I’ll make a note of that.

@Evgeni_Popov
congrats on your new job 🥳
