So I want to make a simple scene where I can walk around AND use my glasses (so 2 cameras, one for each eye)
From GitHub I see we need to choose “immersive-vr” or “immersive-ar”.
Is this a technical limitation I’m not aware of?
I feel like, if we know the camera’s position and rotation from “immersive-ar”, we could, in the worst case, use that info to render the scene ourselves the way “immersive-vr” would
WebXR asks you for the type of session you want to start so it can verify that your device supports that mode. So yes, you will need to provide one of them. On certain devices AR will also provide the camera feed as a background, which you might not want in a VR scenario.
It’s not possible to start both sessions at the same time.
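To make the mode requirement concrete, here is a minimal sketch using the standard `navigator.xr` API. The preference order in the loop is only an illustration, not a recommendation; `'inline'` is deliberately left out because the spec guarantees it and querying it rejects with a `TypeError`.

```javascript
// Sketch: ask WebXR which immersive mode this device supports, then pick
// the first one that works. The mode strings come from the WebXR spec;
// the AR-before-VR preference order here is just an example.
async function pickSessionMode(xr) {
  for (const mode of ['immersive-ar', 'immersive-vr']) {
    if (await xr.isSessionSupported(mode)) return mode;
  }
  return null; // no immersive XR support on this device
}

// In a browser you would then start exactly one session, inside a user
// gesture, e.g.:
//   const mode = await pickSessionMode(navigator.xr);
//   if (mode) { const session = await navigator.xr.requestSession(mode); }
```

Passing `xr` in as a parameter (rather than touching `navigator.xr` directly) just makes the helper easy to exercise outside a browser.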
What glasses? AR glasses like the HoloLens? VR glasses like the Quest? What other features do you want to use? Do you care about detecting walls? Do you want to place something on the floor?
It all depends on your device capabilities. A phone is not capable of rendering two eyes for AR, only for VR. The HoloLens (for example) can render two-eye AR experiences.
AR requires the app to show you “what you see” as a background. A phone is not able to simulate your eyes; it has a single camera that it uses for AR scenarios.
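The one-view-versus-two-views distinction shows up directly in the render loop: each frame’s `XRViewerPose` carries one `XRView` per eye on stereo hardware, and a single view on a phone, so the same code covers both. A minimal sketch (`refSpace` is assumed to come from `session.requestReferenceSpace(...)` during setup, and `drawScene` is a hypothetical renderer):

```javascript
// Hypothetical helper: turn an XRViewerPose into one render pass per view.
// On stereo glasses `pose.views` has two entries (one per eye); on a phone
// AR session it has one.
function viewPasses(viewerPose) {
  return viewerPose.views.map(view => ({
    projection: view.projectionMatrix, // per-eye projection matrix
    transform: view.transform,        // eye position/orientation in space
  }));
}

// Sketch of the per-frame callback inside an active immersive session.
function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  const pose = frame.getViewerPose(refSpace); // refSpace from session setup
  if (!pose) return; // tracking lost this frame
  for (const pass of viewPasses(pose)) {
    // drawScene(pass.projection, pass.transform); // your renderer here
  }
}
```

This is also why the original question’s idea works in principle: the viewer pose from any immersive session gives you the camera position and rotation per eye.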
That’s because the device doesn’t support tracking.
If you want tracking in VR, you will need a device that supports it. Those are hardware limitations that we sadly can’t help with.