WebXR render canvas and streaming

Hi @RaananW and others,

I have a question regarding the render canvas and what happens to it in WebXR. I am attempting to screencast a user's renderCanvas. I can do this via renderCanvas.captureStream(25) to get a stream from the canvas, which I then send over a WebRTC call. Everything works fine while the user is in non-XR mode, but when they enable XR the canvas freezes and no longer updates.
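For reference, the working non-XR path is roughly this (sendToPeer stands in for my WebRTC/PeerJS plumbing, it is not a real API):

    // Sketch of the working non-XR path: grab a 25 fps MediaStream from
    // the canvas and hand it to the WebRTC side (sendToPeer is a
    // placeholder for my own code)
    const renderCanvas = document.getElementById("renderCanvas");
    const stream = renderCanvas.captureStream(25);
    sendToPeer(stream);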

I have tried setting a specific canvas via outputCanvasOptions.canvasElement, and also replacing the stream when switching to XR with xrHelper.renderTarget.canvasContext.canvas.captureStream(25), but again it is blank (I think this is the same canvas anyway).

Is there a way to capture a stream of what the user is seeing in XR? I suspect it is via a canvas.captureStream(), but maybe not. Or maybe there is a setting in outputCanvasOptions.canvasOptions, but I can't find any details on that as the documentation doesn't lead anywhere.

Interestingly, when I enable XR on a PC using the WebXR emulator plugin everything works as expected; it just doesn't work on an actual device.

Any ideas? I need a PG (not sure how I will do this yet, as there are many moving parts).


Hi Mark,

WebXR does not use the canvas directly to render to the device; it renders to a framebuffer that is updated instead of the on-screen canvas (while still using the canvas's GL context).

There was an entire discussion about mirroring here - What happens if a canvas with an xrCompatible context is attached to the DOM? · Issue #603 · immersive-web/webxr · GitHub . Once the canvas is marked as XR compatible, each browser deals with the context (and updating it) differently.
The WebXR examples page does have a spectator-mode demo (Spectator Mode). I have considered making this a Babylon feature, but it has performance implications which I believed (at least back when I considered it :-)) would make it unusable in complex scenarios.

The best way to stream the user's headset is to use the device's native mechanism. Every device has some form of streaming feature that can be used in this case (i.e. an application that shows what the user is seeing).

Thanks for the reply.

Unfortunately, I don't think using the device's native mechanism will work for my use case. I have multiple users/devices streaming their views to a central monitoring peer, kind of like a Zoom session, using PeerJS and WebRTC. For ease of management and simplicity I do not want to install/configure anything on the client end - just a browser.
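Roughly, each client does something like this (monitorPeerId is a placeholder for the central monitoring peer's id; PeerJS server config omitted):

    // Sketch of the client side with PeerJS: a one-way "Zoom-like" call
    // carrying the canvas stream to the monitoring peer
    const peer = new Peer(); // uses the default PeerJS cloud server here
    const stream = renderCanvas.captureStream(25);
    peer.on("open", () => {
        peer.call(monitorPeerId, stream);
    });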

I am now looking into multiple canvases and views, but I am not getting very far yet. I have created a working canvas to create the engine with, and I am then attempting to register views for my main renderCanvas and spectatorCanvas. But something is getting messed up with other parts of the scene(s). It seems like EngineViews are not tied to a scene, so I am wondering if I can have a view defined on the engine which will render whichever scene I am in. Maybe this is one for @Deltakosh.

Hi @Deltakosh, any ideas on this? I am trying to have a spectator view while in XR, i.e. another camera rendering to another canvas that can be used as a stream.
I tried creating the engine with an engineCanvas and then did registerView(renderCanvas) followed by registerView(spectatorCanvas, scene.spectatorCamera), but it is not working. Maybe I am not understanding what happens with views/scenes/cameras and canvases in XR.
I will try some more combinations.

Update:
I think I may have just figured it out. I still have to test it on a real device but the signs are all good. I will test tomorrow (getting a bit late here now).
What I did was define the engine with renderCanvas as per usual, then defined a view:

    // the engine renders into renderCanvas as usual
    engine = new Engine(renderCanvas, antialias, engineOptions, adaptToDeviceRatio);
    // mirror the engine output into a second, hidden canvas
    let view = engine.registerView(engineCanvas);
    // keep pointer/keyboard input bound to the visible canvas
    engine.inputElement = renderCanvas;

and in CSS set:

    #engineCanvas {
        position: absolute;
        z-index: -10;
        width: 100%;
        height: 100%;
    }
    #renderCanvas {
        position: absolute;
        z-index: 10;
        width: 100%;
        height: 100%;
    }

Then when I enter XR I switch my stream from renderCanvas to engineCanvas. (Actually I could probably make the stream always engineCanvas and not bother switching.) Anyway, it is looking promising. I will follow up tomorrow - it may not work on an actual device.
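If I do keep the switch, it can be done without renegotiating the call; a rough sketch, assuming videoSender is the RTCRtpSender carrying the canvas track:

    // Swap the outgoing video track when entering XR using the standard
    // RTCRtpSender.replaceTrack - no renegotiation needed
    const xrTrack = engineCanvas.captureStream(25).getVideoTracks()[0];
    videoSender.replaceTrack(xrTrack);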

1 Like

Update:
No, it didn't work on an actual device. I also tried adding a camera to the scene, which is updated onBeforeRender to match the position and rotation of the active camera, and then changing the view to

    let view = engine.registerView(engineCanvas, spectatorCamera);

but that killed the XR canvas, i.e. nothing showed in the headset - I think probably because the view messed with the activeCamera during rendering.
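For reference, the pose sync itself was simple enough (a sketch; spectatorCamera is a plain camera I added to the scene):

    // Keep the spectator camera glued to whatever camera is currently
    // active (the WebXRCamera once the user is in XR)
    scene.onBeforeRenderObservable.add(() => {
        const active = scene.activeCamera;
        if (active && active !== spectatorCamera) {
            spectatorCamera.position.copyFrom(active.position);
            if (active.rotationQuaternion) {
                spectatorCamera.rotationQuaternion = active.rotationQuaternion.clone();
            }
        }
    });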
Not sure what else to try now. Conceptually it is simple: another camera rendering to another canvas using a view, without messing with the XR view/session. Practically, perhaps not so simple…


Update:
I think the basic problem is that WebXR marks the canvas as XR compatible (as mentioned by @RaananW) and no longer draws to it, but uses its context (presumably making some alterations). The views then attempt to render to this canvas (with its context) and copy it off to the appropriate target context. So both WebXR and EngineViews are using the canvas/context that the engine was created with. I am not seeing a way around this.

Hi Mark,

You should be able to provide your own canvas (and canvas context), but the rendering context will switch to it. Have you tried overriding the xrCompatible call? Does it then mirror to the canvas?

Thanks @RaananW. I had forgotten about that. I did try providing my own canvas early on without success, but now that I have a better understanding of what is happening with canvases/contexts etc. I will revisit this. I will also try overriding the xrCompatible call (when I find it).
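In case anyone is following along, what I have in mind for the override is a monkey-patch along these lines (completely untested, purely an experiment):

    // Untested experiment: pre-create the context and intercept
    // makeXRCompatible to see whether the browser keeps mirroring to the
    // canvas afterwards
    const gl = XRcanvas.getContext("webgl2", { xrCompatible: true });
    const original = gl.makeXRCompatible.bind(gl);
    gl.makeXRCompatible = () => {
        console.log("makeXRCompatible intercepted");
        return original(); // or Promise.resolve() to skip it entirely
    };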

BTW, I meant to attend the WebXR developer summit but it started at 1am local time … and I forgot about it (a theme happening here). I went to join the tail end of it and just missed your presentation :( Oh well, I hope it was a productive summit for you.

Update:
Setting

    outputCanvasOptions: {
        canvasElement: this.XRcanvas
    },

when creating the XRExperience did not work, either with a canvas I created dynamically or with one created beforehand in the DOM. But I noticed in DevTools that the canvas did not have a context while in XR: if I called getContext with any of '2d', 'webgl', or 'webgl2' I got a context back, but subsequent calls to getContext with a different id returned null. This leads me to think there wasn't a context until the first call. In my code I then tried this.XRcanvas.getContext('webgl2') to make sure there was a context, and that seemed to break the XR experience. I feel like I am on the right track but not quite there… Tomorrow I will try getContext('webgl') and then try overriding makeXRCompatible().
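To spell out the DevTools observation, it matches the standard canvas rule that the first getContext call locks in the context type:

    // What I see in DevTools after the XR session starts
    XRcanvas.getContext("2d");     // returns a CanvasRenderingContext2D,
                                   // so no context existed beforehand
    XRcanvas.getContext("webgl2"); // now returns null - type is locked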

Thank you so much for the constant updates!

I really want to find the time to debug this with you, but I am swamped as usual.

That seems off. The canvas context should be used for rendering to the framebuffer. If it isn't, we need to check what we are doing wrong.

If this is the case and the newly added canvas (not the "desktop" canvas) is not being used, it's a bug on our side. If you can, please file an issue on GitHub so we can track it; you can reference this conversation.

Oh, it was 12am local time for me :) I can totally understand why you didn't attend.
I doubt I revealed anything new to Babylon veterans; it was more me trying to explain to a lot of three.js developers how it can be simpler ;)

Update:

Thanks @RaananW, I appreciate the input. I will try to gather as much info as I can and investigate things as much as possible before I create an issue on GitHub. I will keep updating here as I go (and maybe try to create a PG for it). I fully appreciate that you are swamped, so I am not expecting you to investigate, but any guiding pearls of wisdom would be welcome if anything occurs to you.

I just tried on a different device (Quest 1) with the same results, and can confirm the XRcanvas is not getting its context set. In DevTools, after the XRSession is created, I can do XRcanvas.getContext('2d') and get a CanvasRenderingContext2D back … which shouldn't happen.

I also tried to preset the context in my code, prior to creating the XRSession, by doing

    XRcanvas.getContext("webgl", { xrCompatible: true });

but this had no effect - same with "webgl2".

Frustratingly, when I inspect XRSession.renderState.baseLayer in DevTools there is no context property. When running in the WebXR Emulator on desktop there is, and I can see it set to the context for XRcanvas.

Update:
I might be onto something. Just prior to entering XR I check WebXRDefaultExperience.renderTarget.canvasContext and find that it is the context for renderCanvas (my original canvas) and not the alternate XRcanvas I created the experience with.
I will follow this lead as to why the context is not set…
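For the record, the check that exposed it (run just before entering XR):

    // Diagnostic: which canvas does the render target's context belong to?
    const ctx = xrHelper.renderTarget.canvasContext;
    console.log(ctx.canvas === XRcanvas);     // expected true, got false
    console.log(ctx.canvas === renderCanvas); // got true - wrong canvas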

I believe the problem is this method on WebXRSessionManager. It looks to me like it overwrites the canvasElement - unless the engine rendering canvas is altered elsewhere.

(Sorry, code formatting never works for me, even if I add the 4 spaces.)

    public getWebXRRenderTarget(options?: WebXRManagedOutputCanvasOptions): WebXRRenderTarget {
        const engine = this.scene.getEngine();
        if (this._xrNavigator.xr.native) {
            return this._xrNavigator.xr.getWebXRRenderTarget(engine);
        } else {
            options = options || WebXRManagedOutputCanvasOptions.GetDefaults(engine);
            // this line overwrites any canvasElement passed in via options
            options.canvasElement = engine.getRenderingCanvas() || undefined;
            return new WebXRManagedOutputCanvas(this, options);
        }
    }

Update:
OK, I have found a couple of potential problems. I have done a quick hack of the .js files in the @babylonjs/core/XR folder (not ideal, I know, but OK for investigating). I changed line 104 of WebXRSessionManager (the line marked in the previous post) to

    options.canvasElement = options.canvasElement || engine.getRenderingCanvas() || undefined;

and verified that the correct canvas was used, as specified in outputCanvasOptions when calling createDefaultXRExperienceAsync. The returned helper has renderTarget set correctly.

However, it still wasn't working, so I added console.log prints to getWebXRRenderTarget and found that it was being called by enterXRAsync without passing in options, so the render target became the engine renderCanvas again. I notice that there is a third parameter to enterXRAsync, which is a renderTarget, and if I set that to xrHelper.renderTarget then it works. I believe this line should return the renderTarget property of the helper rather than call getWebXRRenderTarget again:

    if (renderTarget === void 0) { renderTarget = this.sessionManager.getWebXRRenderTarget(); }

So my situation now is that I can keep the stream running on renderTarget, with the camera responding to the pose of the headset … but the render into the headset is broken. There must be something else a bit dodgy. Close.

I noticed that the XRcanvas has a style override of width 2372 and height 1208 (which is presumably set somewhere in the process), but the webgl2 context attached to it has drawingBufferWidth: 300 and drawingBufferHeight: 150. Maybe I am missing something that sets the drawing buffer properly?

Probably not. The height and width are set by the render buffer itself, not by the context:

Babylon.js/webXRSessionManager.ts at master · BabylonJS/Babylon.js (github.com)

So it should take the correct height and width when rendering to XR.

The canvas width and height are being adjusted here:

Babylon.js/webXRManagedOutputCanvas.ts at master · BabylonJS/Babylon.js (github.com)

But this should not interfere with the actual rendering.

This is clearly a bug. We should not override the passed canvas element. I will fix that asap.

Only if you don't override it yourself. The default experience helper does pass the required options:

Babylon.js/webXRDefaultExperience.ts at master · BabylonJS/Babylon.js (github.com)

In turn, it passes them to the enter/exit UI, which calls enterXRAsync with the initialized render target (including the options).
You are probably referring to this line:

Babylon.js/webXRExperienceHelper.ts at master · BabylonJS/Babylon.js (github.com)

which takes an option-less call to this function as the default (and overridable) value. The reasoning behind this is that the options don't exist when you call this function yourself, unless you generate them yourself.

And this happens when you are reading the stream in this render target?

OK, that makes sense. I have the default UI turned off and am calling enterXRAsync directly, so I guess it is (rightly) up to me to pass in the renderTarget.
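So my call now looks roughly like this (session mode and reference space as in my app):

    // Pass the render target explicitly since I bypass the default UI,
    // otherwise an option-less default render target is created
    await xrHelper.baseExperience.enterXRAsync(
        "immersive-vr",
        "local-floor",
        xrHelper.renderTarget
    );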

Oops, I was a bit unclear about the stream and the current situation - I meant renderCanvas. The engine is created with renderCanvas and I get a stream from that, which I send out via WebRTC. This all works fine and now continues to work after entering XR, i.e. the receiver still gets the stream, which reflects the pose of the headset. The XR experience is created with XRcanvas, and this is now correctly connected to the renderTarget when calling enterXRAsync … but the headset displays a blank environment and a generic not-working error.

I assume you will have to render twice - once to the WebXR framebuffer and once to the desktop canvas. It has perf costs, of course.

I pushed a change to allow options.canvasElement to be used (exactly what you suggested), which will allow you to do that. The engine's render loop should run twice - once for the WebXR context and once for the desktop canvas. It will require changing the active camera each time you change the render context, though.
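Something along these lines, as a rough, untested sketch - binding the second (desktop) context per pass is the part that still needs working out:

    // Double render per frame: switch the active camera between the XR
    // camera and the spectator camera. How the desktop context gets bound
    // for the second pass is the open question.
    engine.runRenderLoop(() => {
        scene.activeCamera = xrCamera;        // WebXR framebuffer pass
        scene.render();
        scene.activeCamera = spectatorCamera; // desktop canvas pass
        scene.render();
        scene.activeCamera = xrCamera;
    });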

OK, thanks. I didn't get to work on it today, and I am not sure yet how to change the render context for the render function in the loop. I see Engine has a getRenderingCanvas but no setter. Do you mean to manually set engine._renderingCanvas, or is there more to it?

I understand there will be a perf cost, but I might be able to tweak things for that, e.g. dropping the resolution/size of the non-XR renderCanvas for the stream (it doesn't need to be high quality), though I am not sure how much impact the size of the canvas has on the performance cost of rendering.
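For example, something like this might be enough, assuming the scaling only affects the spectator pass (an assumption I still need to verify):

    // Render fewer pixels for the stream: a hardware scaling level of 2
    // renders a quarter of the pixels. Whether this also affects the XR
    // framebuffer is something I have not checked.
    engine.setHardwareScalingLevel(2);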

Hi @RaananW, can you give me any hints on how to switch the rendering context to render with the one passed in through the canvas? I know I can push another render function onto the loop, but I am not sure how to switch contexts.