I’m having trouble playing with multiple cameras…
I want my scene to have an outline postprocess, but I also need to exclude a few meshes from this postprocess.
My idea is to use two parented cameras: one with the post-process attached, and the other, without a post-process, rendering layer mask 0x10000000.
I then give this 0x10000000 layer mask to the few meshes I want to render without the post-process.
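The setup above could be sketched roughly like this. All names are mine, and I'm assuming the outline post-process is a custom shader; the key parts are the two `layerMask` values, the camera parenting, and `scene.activeCameras`:

```javascript
// Bit reserved for meshes that must skip the outline post-process.
const NO_POSTPROCESS_LAYER = 0x10000000;
// Babylon's default layer mask (0x0FFFFFFF) does not include that bit,
// so the post-processed camera never sees those meshes.
const OUTLINED_LAYER = 0x0FFFFFFF;

function setupCameras(scene, canvas) {
  // Main camera: renders everything except the excluded meshes,
  // and carries the outline post-process.
  const mainCamera = new BABYLON.ArcRotateCamera(
    "main", -Math.PI / 2, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
  mainCamera.attachControl(canvas, true);
  mainCamera.layerMask = OUTLINED_LAYER;

  // Second camera, parented to the first so they always stay in sync;
  // it only renders meshes tagged with the reserved bit.
  const noPpCamera = new BABYLON.ArcRotateCamera(
    "noPp", -Math.PI / 2, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
  noPpCamera.parent = mainCamera;
  noPpCamera.layerMask = NO_POSTPROCESS_LAYER;

  // Render the main camera first, then the second one on top.
  scene.activeCameras = [mainCamera, noPpCamera];

  // Attach the outline post-process to the main camera only
  // ("outline" stands for your own post-process shader here).
  new BABYLON.PostProcess("outline", "outline", [], null, 1.0, mainCamera);

  return { mainCamera, noPpCamera };
}

// Meshes to exclude from the post-process get the reserved bit:
// mesh.layerMask = NO_POSTPROCESS_LAYER;
```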
It works; the only issue is that the cameras do not share their depth buffers, so the second one is always rendered on top. Can I have them share it?
(blue cube is rendered with no postprocess, surrounded by red balls)
Maybe there’s another way to do it? I need to stick with the outline coming from the post-process (instead of a geometric outline based on mesh properties) because most of the outline contrast will come from lighting or textures.
I feel like just setting the sphere’s diffuseTexture to the rtt and not using a post-process could be doable too? It might also be more flexible if you need to filter multiple variants of a mesh. Just a feeling, though.
Thanks Jeremy, I’m afraid playground 7HSBLA has the same “issue” I have, with outlines being drawn regardless of depth ^^’
I’ve made an attempt by rendering the noPostProcess camera to another target and feeding the noOutlinePostProcess camera’s depth map into my post-process, but without success so far: https://playground.babylonjs.com/#PS6MUT#16
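For reference, the attempt described above — binding one camera's depth map as an extra sampler of the outline post-process — would look roughly like this. This is a sketch, not the playground code; it assumes the post-process shader declares a `depthSampler` uniform (listed in the post-process's samplers array at creation), and the function name is mine:

```javascript
// Feed a camera's depth map into an existing post-process so the
// shader can compare depths between the two cameras' renderings.
function feedDepthToPostProcess(scene, camera, outlinePp) {
  // Enable a depth renderer for this camera; Babylon then renders
  // scene depth into a texture we can sample.
  const depthRenderer = scene.enableDepthRenderer(camera);
  const depthMap = depthRenderer.getDepthMap();

  // Bind the depth texture every time the post-process runs.
  outlinePp.onApply = (effect) => {
    effect.setTexture("depthSampler", depthMap);
  };
}
```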
I’ll retry later outside the playground so I can draw the depth textures into canvases for debugging. Thanks again and have a nice day!
Regarding this one, I have a fix, but it does not work with MSAA, so I’m still having a look.
But in any case, I think we should make some changes in the engine so that the user can take control of when the framebuffer is cleared for a camera. That way, it would be an easy fix (mine is not so easy).
And that’s it! Thanks to pp.autoClear = false, the rendering from the 1st camera won’t be cleared, and the 2nd camera will just render over what already exists in the texture. And thanks to the _shareDepth(rtt.renderTarget) call above, the 2nd camera will take the depth buffer coming from the 1st camera into account.
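The two pieces of that fix could be sketched like this. Note this relies on an internal, underscore-prefixed API (`_shareDepth`), so the exact receiver may differ between engine versions; here I'm assuming `rtt` is the render target the 1st camera renders into and `pp` is the first post-process of the 2nd camera:

```javascript
// Hypothetical sketch of the fix: keep the 1st camera's rendering and
// reuse its depth buffer for the 2nd camera.
function shareDepthBetweenCameras(rtt, pp) {
  // Don't clear before the 2nd camera's post-process chain runs,
  // so the 1st camera's rendering stays in the texture...
  pp.autoClear = false;

  // ...and make the 2nd camera's input render target share the depth
  // buffer of the 1st camera's render target, so depth testing works
  // across both cameras (internal API, engine-version dependent).
  pp.inputTexture._shareDepth(rtt.renderTarget);
}
```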
So you can choose between your solution and mine. Note that in mine you don’t have to create depth renderers and perform the check “by hand” in the shader; you can reuse the depth buffer of the 1st camera with the 2nd one.