Rendering to multiple cameras in XR

Hi again!

Could you advise me on how to solve this task:

In “flat” mode (on screen) I render the scene and the UI with separate cameras. It works pretty well, for many reasons.

But how can we replicate this experience in XR mode?

The XR Helper (it’s very good that we can use it) builds a WebXR camera (I guess this camera is needed just as a parent node for the two eye cameras that render the scene).

Can I create a second pair of cameras for the UI in the same XR session? How?
Or would that be too hard a task for the headset’s GPU?

Before, in VR mode, I linked the UI to the main scene camera, and that solution caused many problems when adjusting the UI. The most obvious problem is that we need different FOVs for the scene and the UI, but that’s not the only one.


That’s an interesting question!

The current architecture does not allow you to render another camera on top of the two already-enabled rig cameras.
The GUI usually renders on top of a utility layer and actually uses a different scene. What were the issues you encountered? Maybe they can be fixed instead?
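For context, a minimal sketch of that utility-layer idea, using the standard `babylonjs-gui` package (the control names below are illustrative, and the exact wiring is an assumption, not code from this thread):

```javascript
// Sketch: the 3D GUI renders through a GUI3DManager, which internally
// draws into a utility layer — effectively a separate scene composited
// on top of the main one. Assumes babylonjs and babylonjs-gui are loaded.
function createUiManager(scene) {
    const manager = new BABYLON.GUI.GUI3DManager(scene);

    // Illustrative controls attached to the manager:
    const panel = new BABYLON.GUI.StackPanel3D();
    manager.addControl(panel);

    const button = new BABYLON.GUI.Button3D("okButton");
    panel.addControl(button);

    // The separate scene the GUI draws into is reachable via
    // manager.utilityLayer.utilityLayerScene
    return manager;
}
```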

Do you mean the 3D GUI interface?

Yep, sorry, I was referring to the 3D GUI.

Want to describe the issues you had?

It’s not a real issue, I’m asking for ideas on a better way :)

In “flat mode” we have a 3D interface that is separated from the real scene with a layer mask. The whole interface is rendered by a special UI camera on top of the scene render. In the scene, cameras can switch and fly, and lights or shadows can be added or disposed; none of this touches the interface. The UI camera always stays unmoved and everything works fine.
I’m asking how we can do something similar in an XR session.
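The flat-mode setup described above can be sketched roughly like this (the mask values follow Babylon.js defaults, but the camera names and parameters are illustrative, not from this thread):

```javascript
// Two-camera "flat mode": the scene camera renders the world, while a
// fixed UI camera renders only meshes tagged with a separate layer mask.
const SCENE_MASK = 0x0FFFFFFF; // Babylon's default layer mask
const UI_MASK    = 0x10000000; // a bit outside the default mask

function setUpFlatModeCameras(scene) {
    // World camera: sees only meshes with the default mask.
    const sceneCamera = new BABYLON.ArcRotateCamera(
        "sceneCam", 0, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
    sceneCamera.layerMask = SCENE_MASK;

    // UI camera: stays unmoved, sees only UI-masked meshes.
    const uiCamera = new BABYLON.FreeCamera(
        "uiCam", new BABYLON.Vector3(0, 0, -5), scene);
    uiCamera.layerMask = UI_MASK;

    // Rendered in order: scene first, then UI on top.
    scene.activeCameras = [sceneCamera, uiCamera];
    return { sceneCamera, uiCamera };
}

// Each UI mesh must opt in: uiMesh.layerMask = UI_MASK;
```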

Every time the simulator needs both XR and flat mode, the volume of work grows significantly, in part because we need to adapt the UI.

What is the better way? :)

You are describing a full-screen mode for the UI, no? Something like a HUD? It’s like attaching UI elements to the camera itself.

Or am I misunderstanding?

Yes, exactly.
Something like in the picture.

UI in XR is a bit different (in general, not only in BabylonJS). This kind of UI would be an object on the screen that can be “touched” or pressed (or selected).
Having said that - you can attach the 3D GUI to the main XR camera, so it will always follow it. There might be an issue with rotation. A quick solution for that would be to only take the camera’s position and the y component of its rotation and apply it to the GUI. Would that work?
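That “position plus yaw only” follow behavior can be sketched like this (the `uiRoot` node and function name are illustrative):

```javascript
// Make a UI root node follow the XR camera's position and only the
// Y (yaw) component of its rotation, so the HUD stays level even when
// the user tilts their head.
function attachUiToXrCamera(scene, xrCamera, uiRoot) {
    scene.onBeforeRenderObservable.add(() => {
        // Copy the camera's full position.
        uiRoot.position.copyFrom(xrCamera.position);

        // Take only the yaw from the camera's orientation.
        const yaw = xrCamera.rotationQuaternion
            ? xrCamera.rotationQuaternion.toEulerAngles().y
            : xrCamera.rotation.y;
        uiRoot.rotation.set(0, yaw, 0);
    });
}
```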

Yes, I have done that with lights on a camera with standard materials. One problem that is solved in WebXR with this is that units are always meters; it is much easier to set a distance.

Also, I tried using a render camera, and these problems resulted.

Yes, I understand.
The UI is attached to the camera, and the UI elements are scene elements…
But! The interface often needs some isolation from scene lights, shadows, post-processes, etc.
Many of these are linked to the camera.
For example, we always need to see the interface items, but the scene can be dark or hyper-bright.
Also, we can’t comfortably use glow, highlights, or post-processes for UI items separately from other scene items.
It narrows the possibilities.

You can attach a BABYLON.PointLight, which requires no aiming, to the camera, using a layer mask value beyond the default, e.g. 0x10000000. Assign this layer mask to each of the meshes in your UI. Now all the meshes will be consistently lit if, in a scene.beforeRender, you assign the light’s position to the camera’s. You will need a beforeRender anyway to position the UI.

This light should not interfere with scene meshes. The default layer mask for lights and meshes is 0x0FFFFFFF.
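Putting the above together as a sketch (the `UI_MASK` constant and function name are illustrative; the mask value and beforeRender idea come from the post above):

```javascript
// A point light that lights only UI meshes (via a layer mask outside
// the default 0x0FFFFFFF) and follows the camera every frame.
const UI_MASK = 0x10000000;

function createUiLight(scene, camera) {
    const uiLight = new BABYLON.PointLight(
        "uiLight", camera.position.clone(), scene);

    // Restrict this light to the UI layer only.
    uiLight.includeOnlyWithLayerMask = UI_MASK;

    // Keep the light glued to the camera.
    scene.onBeforeRenderObservable.add(() => {
        uiLight.position.copyFrom(camera.position);
    });
    return uiLight;
}

// Every UI mesh must carry the same mask: uiMesh.layerMask = UI_MASK;
```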

Looking at my code more: while scene meshes will not be affected by the point light, if you are not doing PBR without any lights (i.e. your scene does have lights), then you need to exclude those lights from also lighting the UI.

// Exclude every existing scene light, except the UI light, from the UI layer.
for (const light of scene.lights) {
    if (light !== ui_light) {
        light.excludeWithLayerMask = 0x10000000;
    }
}

// Do the same for any light added later.
scene.onNewLightAddedObservable.add((newLight) => {
    if (newLight !== ui_light) {
        newLight.excludeWithLayerMask = 0x10000000;
    }
});

Thanks for your example, but I already use this solution.
How can we do something like that with a glow layer or a shadow generator? :wink:
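For what it’s worth, both of those effects have per-mesh inclusion lists in Babylon.js, so the same isolation idea might be sketched like this (the function and parameter names are illustrative; this is an assumption about the approach, not an answer given in this thread):

```javascript
// Sketch: restrict glow and shadows to UI meshes only, which gives some
// of the same isolation without a second camera.
function isolateUiEffects(scene, uiLight, uiMeshes) {
    // Glow applies only to the listed UI meshes.
    const glow = new BABYLON.GlowLayer("uiGlow", scene);
    for (const mesh of uiMeshes) {
        glow.addIncludedOnlyMesh(mesh);
    }

    // Shadows cast only by UI meshes, from the UI light.
    const shadows = new BABYLON.ShadowGenerator(1024, uiLight);
    for (const mesh of uiMeshes) {
        shadows.addShadowCaster(mesh);
    }
    return { glow, shadows };
}
```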

Look, I’m not saying anything is impossible; we can fine-tune the scene with tricks and hacks, excluding things, controlling renderingGroups, separating lights, and playing with masks. But in flat mode we can do all of this with a couple of lines of code, simply by starting rendering for a second camera (yes, with another mask).

The question is: can we do it just as easily in XR? How?
Theoretically there are no barriers to creating one more pair of cameras and setting the same XRWebGLLayer as output, like the first pair. Or are there?
If yes, we would only need to think about preserving performance and keeping the UI elements simple.