Multi-Camera / Multi-scene setup

I have been playing around with multiple cameras and scenes and read the tutorials, but something in my demo setup still seems challenging, so I’m looking for ideas.

Basically I would like to have a normal 3D “layer” rendered with one camera, and a UI “layer” rendered with another camera.

I tried using layerMask, but my problem is that it’s only available on meshes, and I want to expose custom add methods for these two layers that can accept any possible children later, something like:

// example mask values (any two distinct bits work)
const MAIN_LAYERMASK = 0x10000000;
const UI_LAYERMASK = 0x20000000;

const sphere = BABYLON.MeshBuilder.CreateSphere("sphere", { diameter: 2, segments: 32 }, scene);
sphere.layerMask = MAIN_LAYERMASK;

const plane = BABYLON.MeshBuilder.CreatePlane("defaultPlane", {}, scene);
plane.layerMask = UI_LAYERMASK;

// in the render loop:

In this case, I would like these add methods to accept TransformNodes as well, which may have nested children (some TransformNodes and some Meshes). For this to work, I would have to traverse those nodes and apply the layerMask to each Mesh instance. Also, if someone adds more meshes to one of these objects after calling my layer.add() method, I have no way of knowing about that, so the new meshes wouldn’t get the proper layerMask value and wouldn’t render with the correct camera.
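For what it’s worth, the traversal step itself is straightforward with `getChildMeshes()` (which returns all descendant meshes, so nested TransformNodes are covered). This is just a sketch; `applyLayerMask` is a hypothetical helper:

```javascript
// Apply a layer mask to a node and all of its mesh descendants.
// Works for a Mesh or a TransformNode: getChildMeshes() returns
// every descendant mesh, including ones nested under TransformNodes.
function applyLayerMask(node, mask) {
  if (node.layerMask !== undefined) {
    node.layerMask = mask; // the node itself is a mesh
  }
  if (typeof node.getChildMeshes === "function") {
    for (const mesh of node.getChildMeshes()) {
      mesh.layerMask = mask;
    }
  }
}
```

This doesn’t solve the “meshes added later” problem, of course; those would still need to be re-tagged.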

So I was thinking the other option is to use two scenes, a UI scene and a Main scene, each with its own camera, and render them one after the other. The limitation here is that these scenes cannot share anything between them (meshes, nodes, materials), and users of my custom layer.add method have to create the objects they want to add with the appropriate scene. It will not work if they create a mesh for the UI scene and mistakenly add it to the main scene, especially because the scene of a node cannot be changed once it’s created.
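For context, the render order I mean would look something like this sketch. The scene names are illustrative; the usual Babylon pattern is to disable autoClear on the scene drawn on top so it doesn’t wipe the pixels underneath:

```javascript
// Draw each scene in order, back to front.
function renderLayers(scenes) {
  for (const scene of scenes) {
    scene.render();
  }
}

// Babylon hookup (sketch, names illustrative):
// const mainScene = new BABYLON.Scene(engine);
// const uiScene = new BABYLON.Scene(engine);
// uiScene.autoClear = false; // don't clear away the main scene's render
// engine.runRenderLoop(() => renderLayers([mainScene, uiScene]));
```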

Am I thinking about this correctly? Is there anything I may have missed?

I think you need to overlay multiple scenes for rendering :smiley:

One way of implementing this is, as you say, having multiple scenes and rendering them one after the other.
But you did find the problem - reusing data between scenes will be relatively hard to achieve.

Another suggestion would be to use a multi-camera system, and enable/disable meshes (or nodes) according to the current camera rendering. Something along the lines of this:
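A minimal sketch of that idea (tagging each mesh with a camera name via `metadata.camera` is an assumed convention, not a Babylon API; the scene hookup is shown as a comment):

```javascript
// Enable each mesh only when "its" camera is about to render.
// mesh.metadata.camera is an assumed tagging convention.
function enableForCamera(meshes, cameraName) {
  for (const mesh of meshes) {
    mesh.setEnabled((mesh.metadata && mesh.metadata.camera) === cameraName);
  }
}

// Babylon hookup (sketch):
// scene.onBeforeCameraRenderObservable.add((camera) =>
//   enableForCamera(scene.meshes, camera.name)
// );
```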

Notice how I choose which box is rendered inside the onBeforeCameraRenderObservable observer. This way you can choose which mesh should render and which shouldn’t. This could be optimized much further, of course. Just a suggestion.

You can also use the layerMask property of the camera / mesh to make some meshes rendered only by some specific camera(s):
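The rule behind layerMask is a simple bitwise test: a camera renders a mesh only when the two masks share at least one bit. The mask values below are illustrative (Babylon’s default layerMask is 0x0FFFFFFF):

```javascript
// Two disjoint mask bits, one per "layer" (illustrative values).
const MAIN_LAYERMASK = 0x1;
const UI_LAYERMASK = 0x2;

// A camera renders a mesh only when (camera.layerMask & mesh.layerMask) !== 0.
function isRenderedBy(cameraMask, meshMask) {
  return (cameraMask & meshMask) !== 0;
}

// Babylon hookup (sketch):
// mainCamera.layerMask = MAIN_LAYERMASK;
// uiCamera.layerMask = UI_LAYERMASK;
// sphere.layerMask = MAIN_LAYERMASK; // rendered only by mainCamera
```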


Thanks guys!
I went with the layerMask approach as that works almost “out of the box” and doesn’t require splitting resources into two scenes. It seems to work well for now.
