WebXR render canvas and streaming

I will try creating a playground this week to show how to achieve this, as it seems to be a needed feature.
Will keep you updated here

Hi @RaananW, has there been any movement on this? Or can you give me a hint/suggestion as to how to switch the rendering context? I was called away to deliver some seminars so have not been able to work on it, but was hoping you might have made some progress in the meantime. Sorry to hassle you about it, but I have stakeholders hammering me to prove that this is viable - as always, they want a solution yesterday!
I will start to tinker and see what I can achieve.

I had little to no time last week. I will do my best to get it to work this week.

The right direction would be to render the desktop camera along with the WebXR camera, one after the other. No need for an extra canvas.
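For reference, a minimal sketch of that idea in Babylon.js, assuming an existing engine/scene and the default XR experience helper (the variable names are illustrative, not from the original post):

```js
// Create a regular desktop camera alongside the WebXR experience.
const desktopCamera = new BABYLON.FreeCamera(
    "desktopCamera",
    new BABYLON.Vector3(0, 1.7, -3),
    scene
);

const xr = await scene.createDefaultXRExperienceAsync();

// Babylon renders every camera in scene.activeCameras each frame, so the
// desktop view is drawn right after the XR view - no extra canvas needed.
scene.activeCameras = [xr.baseExperience.camera, desktopCamera];
```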

Ah great, thanks for that. Based on this comment I was able to get it working. Quite a performance hit at the moment though (as expected), so I will look into that now and see if I can find ways to minimise it, e.g. reduced resolution, etc.
Thanks for your guidance @RaananW

Update: I was able to get a significant improvement by passing parameters to the second call to render the scene so as not to repeat the animations/physics, i.e. scene.render(false, true). I have other optimisations to do, such as with the ADT, but I am now on track. Thanks again.
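A minimal sketch of that optimisation, assuming a render loop that draws both cameras in turn (the camera variables are illustrative). Scene.render takes (updateCameras, ignoreAnimations), so passing (false, true) on the second call skips the per-frame work that already ran:

```js
engine.runRenderLoop(() => {
    // First pass: full render (animations, physics, camera updates).
    scene.activeCamera = xrCamera;
    scene.render();

    // Second pass: desktop view only - don't update the cameras and
    // don't re-run animations/physics a second time.
    scene.activeCamera = desktopCamera;
    scene.render(false, true);
});
```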

Tracked using this PR (as it is very much related to that)

Just wanted to follow up (this is the second topic that is connected to this issue) - this PR addresses the issues described here. I hope it helps (and that it will be merged soon :wink: )

[XR] WebXR spectator mode for desktop VR experiences by RaananW · Pull Request #10768 · BabylonJS/Babylon.js (github.com)

@RaananW is it possible to make use of secondary views and the MultiView optimization to make this more performant?

I am working on multiviews already. I hope to have a draft by next week.

I have a scene which should really give multiview a proper workout. You know it, and where it is. You can use it for your testing, if you want. A scene with just a sphere and a ground is not going to be adequate for this.

You should also give it its own thread.

Thanks @RaananW, I have updated my project to 5.0 to test this and enableSpectatorMode is working. Is there a way to disable it once it has been enabled? I may have the need to toggle this on or off. Is it as simple as clearing the activeCameras array or is there more to it? Would it be possible to have a bool parameter on enableSpectatorMode to indicate enable/disable?

Hey @MarkM, yes, it is as simple as removing the second camera from the array. You can submit a feature request to have it as a mutable flag (i.e. not the way it is now :-)). We always accept PRs :slight_smile:
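A hedged sketch of that toggle, assuming spectator mode works by adding a spectator camera to scene.activeCameras; the camera name below is an assumption based on the implementation at the time, so verify it against your Babylon version:

```js
// Turn spectator mode on via the WebXR experience helper.
xr.baseExperience.enableSpectatorMode();

// Turn it off again by removing the spectator camera from activeCameras.
// "webXRSpectatorCamera" is an assumed name, not a documented constant.
const spectatorCam = scene.getCameraByName("webXRSpectatorCamera");
if (spectatorCam && scene.activeCameras) {
    scene.activeCameras = scene.activeCameras.filter((c) => c !== spectatorCam);
}
```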

Yeah I’d be happy to do a PR, I guess that would mean getting up to speed with TypeScript (rather than JS). As a start I’ll add a feature request when the sun comes up.
I have a rather unusual set of circumstances; I live on a boat with a small battery bank and solar panel which can barely keep up with the (high-powered gaming) laptop, let alone multiple VR headsets. Normally I could go to the library, but we are in severe lockdown at the moment; hopefully that will be lifted next week. Fingers crossed.

Hi guys,

I read your whole discussion with a lot of interest. I am also looking for a way to stream an immersive-ar session via WebRTC to a remote device. I’ve searched for days now, but haven’t found an answer so far.
I am working with three.js and WebXR - so I don’t even know if I am supposed to ask for ideas on my topic here in this Babylon.js forum; but I am desperate, so I want to give it a try :wink:

As far as I understand, it is not possible to simply capture the stream of the underlying canvas, because the XR session only uses the canvas’s WebGL context rather than rendering to it. Is there any chance to capture what’s shown on the screen?

Thanks a lot for any answers in advance!

Best wishes

Michael

Hi @theweinzierl, welcome to the Babylon forum, where we accept you just as you are, even if you use a different framework :wink:

WebXR doesn’t provide you access to the camera data, so it is not possible to capture the real-world camera feed. The framebuffer used to render on top of the real-world data is available, and you can technically “export” its data on each frame, but I assume this is not exactly what you are trying to achieve.
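For completeness, a very hedged sketch of what “exporting” the framebuffer each frame could look like with raw WebXR/WebGL; this is untested, and reading back an opaque XR framebuffer may be restricted or slow depending on the implementation:

```js
// Inside the XR render loop: read the layer's framebuffer back to the CPU.
function onXRFrame(time, frame) {
    const session = frame.session;
    const layer = session.renderState.baseLayer;

    gl.bindFramebuffer(gl.FRAMEBUFFER, layer.framebuffer);
    const pixels = new Uint8Array(
        layer.framebufferWidth * layer.framebufferHeight * 4
    );
    gl.readPixels(0, 0, layer.framebufferWidth, layer.framebufferHeight,
        gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // ...draw `pixels` onto a 2D canvas (putImageData) and stream that
    // canvas with captureStream() - note this is only the rendered overlay,
    // not the real-world camera image.

    session.requestAnimationFrame(onXRFrame);
}
// Kick off with session.requestAnimationFrame(onXRFrame) once the session starts.
```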

Hi @RaananW, firstly, thanks a lot for your fast response and tolerance :grinning_face_with_smiling_eyes:

That sounds interesting. Maybe I could export the data on each frame and send it via WebRTC to the callee. Sending the camera capture via WebRTC shouldn’t be a problem. I don’t know if putting together those two tracks on the callee’s side is possible. But let’s see…

Any hints on how to export the framebuffer render data? Is there a MediaStream accessible, like when calling canvas.captureStream()?

Thanks a lot for your help, especially with my amateur questions ;). I am just starting to get into this whole topic.

Hi, sorry for the late reply to this, I have been on other projects and am just catching up. This is perhaps another thing to try, if you haven’t already, but it’s just a thought and I haven’t tried it. I have not used immersive AR yet, as my project was immersive VR, so this idea was not appropriate for it.
Could you let your engine of choice do the rendering and have your program capture the display via getDisplayMedia? If it works, it will give you a stream that you can send, i.e. do the work outside of the engine rendering process. MediaDevices.getDisplayMedia() - Web APIs | MDN
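A minimal sketch of that idea, assuming an already-established WebRTC peer connection (the pc variable is illustrative):

```js
// Ask the browser to capture the display (or a tab/window) as a MediaStream.
const displayStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

// Forward the captured video track over WebRTC.
for (const track of displayStream.getVideoTracks()) {
    pc.addTrack(track, displayStream); // pc: an existing RTCPeerConnection
}
```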

Again, I don’t know what this will return when the user is in AR mode. For my project, with the changes Raanan added, I was able to set up an async routine that copied the spectator canvas to another canvas, at whatever framerate and resolution I wanted, and then captured the stream from that for WebRTC.
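A rough sketch of that canvas-copy approach (the element ID, resolution, and framerate here are illustrative):

```js
const source = document.getElementById("renderCanvas"); // the spectator canvas
const mirror = document.createElement("canvas");
mirror.width = 640;   // reduced resolution keeps the per-frame copy cheap
mirror.height = 360;
const ctx = mirror.getContext("2d");

// Copy (and scale) the source canvas at ~15 fps.
setInterval(() => {
    ctx.drawImage(source, 0, 0, mirror.width, mirror.height);
}, 1000 / 15);

// Capture a MediaStream from the mirror canvas for WebRTC.
const stream = mirror.captureStream(15);
// pc.addTrack(stream.getVideoTracks()[0], stream); // hypothetical peer connection
```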

Hi @MarkM,

thanks a lot for your reply. Capturing the display was my first idea as well. But I am using a smartphone as my AR device, and unfortunately getDisplayMedia() is not supported on mobile devices as far as I know. Or am I missing something there?
I will do some testing with the other approach in the next few days and will let you know if it is successful. Fingers crossed ;D

Ah, no, you are right. Not available on mobile:
https://caniuse.com/?search=getDisplayMedia

Hi friends!

If you are interested: I am now using an inline session, and the 3D model gets drawn directly to the canvas. The rendered 3D model does not behave as smoothly as in an immersive session (I guess the pose tracking doesn’t work in an inline session). For example, it doesn’t adapt its size when you bring it closer. But positioning is possible, so it actually fits my needs.
Then I simply overlay the video element - to which the camera MediaStream is linked - with the canvas showing the 3D model. Both sources can then be streamed via WebRTC to a remote peer, where the same overlay is used.
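A sketch of that overlay setup, assuming hypothetical element IDs and an existing RTCPeerConnection pc:

```js
// Camera feed in a <video> element, sitting behind a transparent render canvas.
const video = document.getElementById("cameraFeed");
const canvas = document.getElementById("renderCanvas");

const camStream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
});
video.srcObject = camStream;
await video.play();

// Send both sources to the remote peer, which layers them the same way.
const canvasStream = canvas.captureStream(30);
pc.addTrack(camStream.getVideoTracks()[0], camStream);
pc.addTrack(canvasStream.getVideoTracks()[0], canvasStream);
```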
To be honest, this solution seems very fragile to me, but it is the best result I could achieve. I hope there will be some native routine in the WebXR API to share an immersive-ar session directly. As far as I understood, that is in the works.

Thanks a lot for your help and excuse me for spamming this thread :wink:
