Physics template for WebXR camera (WebXR/Physics/Camera)


In one of my questions, I was shown this playground

I dug into it and really liked how the physics of the camera works in this example.

Can the same physics be done for a WebXR camera, with 6DOF kept working and the MOVEMENT feature?

This sandbox was made by syntheticmagus. Here is a link to the original post: Moving camera horizontally - #7 by syntheticmagus

I’m not 100% sure what you mean by “6DOF work saving,” but the short answer is that it should be possible to do this same sort of thing for XR, and in fact I do plan to create first-person character controllers, including for XR, as part of next year’s continuing golden path efforts.

However, making an XR character controller won’t be done in a single step as there are quite a few other technologies that need to be in place for that to make sense. Thus, an XR character controller probably won’t be a part of even the first several golden paths I work on next year because other challenges (site integration, content reuse/package logistics, environment physics, etc.) will need to be tackled first. If you want to help make sure this becomes a priority as soon as possible, though, keep an eye out for golden path-related topics on the forum and help make sure we know this is something the community wants. Looking forward to it!


Thanks for the answer!

I mean the following:
Your playground has code like this:

scene.onAfterPhysicsObservable.add(() => {

This means that the camera copies the position of the physics sphere.
While I’m in regular 3D, without a headset and controlling the camera from the keyboard, this solution looks very good.
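As a plain-JS illustration of what that copy does (no Babylon dependency here; `followBody`, `eyeHeight`, and the literal positions are all made up for the sketch):

```javascript
// Sketch of the playground's pattern: after each physics step, the camera
// position is overwritten with the simulated body's position, plus an
// offset so the "eyes" sit above the body's center.
const eyeHeight = 1.5; // assumed eye height above the body center, in meters

function followBody(cameraPos, bodyPos) {
    // The equivalent of camera.position.copyFrom(sphere.position) plus an
    // offset, as run inside scene.onAfterPhysicsObservable.add(() => { ... }).
    cameraPos.x = bodyPos.x;
    cameraPos.y = bodyPos.y + eyeHeight;
    cameraPos.z = bodyPos.z;
    return cameraPos;
}

const cam = followBody({ x: 0, y: 0, z: 0 }, { x: 2, y: 0.5, z: -1 });
// cam is now { x: 2, y: 2, z: -1 }: the camera tracks the body exactly,
// which is precisely what breaks 6DOF head tracking in XR (see below).
```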

6DOF Problem
But when I put on the VR headset, everything is different.

If my WebXR camera keeps copying the position of the physics sphere, then 6DOF will stop working. I will move my head up/down, left/right, forward/backward, but because the WebXR camera copies the position of the sphere, 6DOF will appear to be off.

Standard Ellipsoid Problem
A standard camera ellipsoid works well, but with gravity turned on, things go wrong.
You can see it here: try to step on the cube, and the camera jumps back.
The same camera behavior occurs in XR mode.
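For context, the standard ellipsoid setup being described is roughly the following (a non-runnable sketch; `scene`, `camera`, `ground`, and `cube` are assumed to already exist, and the values are illustrative):

```javascript
// Built-in camera collision setup (sketch; values illustrative).
scene.collisionsEnabled = true;
scene.gravity = new BABYLON.Vector3(0, -0.5, 0);

camera.checkCollisions = true;                         // collide with flagged meshes
camera.applyGravity = true;                            // pulled down by scene.gravity
camera.ellipsoid = new BABYLON.Vector3(0.5, 0.9, 0.5); // capsule-like bounds around the camera

ground.checkCollisions = true;
cube.checkCollisions = true; // stepping onto this is where the camera jumps back
```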

I really liked your version of camera collisions, and I adapted it to my first-person camera app.

Golden Paths
I understand that you will be working on the golden paths with the whole team. That is very cool! I will be very happy to take a ready-made solution and apply it.

Temporary solution
As far as I understood from your answer, simply transferring the logic from your playground example to a WebXR camera will not work.

In this case, is there any solution for the problems I have described? Maybe something temporary and not perfectly working, but similar to your playground example?

Hey, sorry for the delayed reply.

I believe you’re correct, that the physics approach used for a conventional first-person character controller is completely incompatible with an XR first-person character controller. I don’t think that’s a bad thing, though; XR interfaces in general are a very different problem from screen-centric interfaces, and in my opinion they should be handled separately for that reason.

Creating a physics-enabled XR first-person character controller is definitely something I want to do, ideally in association with a golden path, but unfortunately that’s still quite a few steps down the line as I have to address screen-based first-person controls (among other things) before that. If you need to get out ahead of that, I can list some of the guiding principles I’m expecting to use when I make my version, but do take these with a grain of salt as I haven’t actually built the thing yet. :wink:

  • In my opinion, the #1 inviolable rule of XR controls is that you should never—never—directly set the position on an XR camera. The full rationale for this is complex, but the short version is that the direct position (and rotation, etc.) on an XR camera is by definition controlled by the XR tracking system, and you should never have two disjoint systems (i.e., XR tracking and game logic) trying to modify the same value. Babylon allows you to do this for legacy reasons and to allow maximum flexibility, but in my opinion using this power is almost always a mistake. Instead, the correct way to control an XR camera is to parent it under a transform node, then articulate the transform node with your game logic while the camera itself (within the transform node’s subspace) is articulated by XR tracking logic alone.
  • The physics relationship between the XR real world and the virtual world is tricky. In ordinary screen-based character control, the character can be genuinely considered a physics entity in the virtual world, so all the physics can readily be simulated in one system: the character can affect the world in the same way that the world can affect the character. This is not true in XR: while the XR character (i.e., player) can affect the physics of the virtual world, the virtual world cannot move the physical world, and attempting to suggest otherwise will almost always cause discomfort. For this reason, I recommend separating XR physics simulation into two distinct systems, one for moving the virtual world and one for moving the player. The player’s interactions with the virtual world are easy: kinematic physics imposters on hands, head, etc. will give you a very plausible ability to make the virtual world respond to you. (It may be better to use a spring-tethered non-kinematic alternative, but I haven’t explored that yet.) Making the player seem to respond to the world, by contrast, should not be done using physics as such because doing so can cause discomfort and nausea. Instead, I recommend making bespoke systems to handle things like falling and collision resolution; that way, you can take comfort-focused actions to resolve problems with player movement without having to worry about whether those solutions are physically accurate.
  • Another big problem to be mindful of is conflict resolution between the real and virtual worlds. Suppose, for example, that an XR character picks up a virtual pipe, grasping with both hands half a meter apart on the pipe. In XR, this will be represented by the virtual hands locking onto the pipe 0.5 meters apart, after which the user will be able to manipulate the virtual pipe using hand motion. However, in the course of this manipulation the user’s hands move erratically because they are not holding an actual pipe, and soon the real hands are 0.7 meters apart without ever having released their grip. What should the virtual hands do? Should they slide along the pipe? If so, which hand (if either) should remain stationary relative to the pipe while the other hand moves? Or should the hands both remain locked onto the pipe? If so, which virtual hand (if either) continues to track its real counterpart while the other virtual hand becomes disassociated? Or should one of the hands simply detach from the pipe? Which hand? Or, if the pipe is breakable, should it break? Resolving these sorts of conflicts between the real world and the virtual world will be a key part of maintaining comfort in an XR character controller, and the “right” answer may vary from application to application. Thus, I don’t really have any strong recommendations for this one; it’s probably best for you to consider your scenario carefully upfront, discern what kinds of real/virtual conflicts are possible, and decide on what resolution mechanisms are right for your app.
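To make the first point above concrete, here is a small sketch of the parent-transform idea. In Babylon this would be done by parenting the camera (e.g. `xrCamera.parent = new BABYLON.TransformNode("rig", scene)`); the plain objects below stand in for that so the composition of the two systems is visible (all names are placeholders, not an actual API):

```javascript
// Sketch: the "rig" transform is owned by game logic, the head-local
// offset is owned by XR tracking, and neither ever writes the other's value.

const rig = { x: 0, y: 0, z: 0 };         // game logic writes only this
const headLocal = { x: 0, y: 1.6, z: 0 }; // XR tracking writes only this

function headWorldPosition() {
    // Ignoring rotation for brevity: world = rig position + head-local offset.
    // This is what parenting the XR camera under a transform node computes.
    return {
        x: rig.x + headLocal.x,
        y: rig.y + headLocal.y,
        z: rig.z + headLocal.z,
    };
}

// Game logic teleports the player; the tracking data is untouched,
// so 6DOF head motion keeps working on top of the new location.
rig.x = 10;
headLocal.z = 0.3; // the user leans forward in the real world
const head = headWorldPosition();
// head is { x: 10, y: 1.6, z: 0.3 }: both systems contribute, neither fights.
```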

So… Unfortunately, no, I don’t know of an easy temporary approach to getting a working, comfortable physics-enabled XR character controller. :upside_down_face: Making such a thing is very doable and I definitely want us to have one, but developing it will probably take a good bit of effort and time. If you do want to move forward with making your own in the immediate term, hopefully some of the above considerations will be helpful; and if not, then hopefully they at least helped clarify some of the issues involved when we come back to this a little down the line. Best of luck!


Thank you very much for such a detailed answer!

This seems like a very big, fundamental problem.

I hope you can implement this in the golden paths.

But until then, I will try to find some kind of solution, and if it more or less works, I will share it here on the forum.


Hi @Nawar - the WebXRController was initially written with physics in mind, so I was able to walk up stairs and fall off edges. The first commit was a proof of concept - the “physics” was just a lerp; it wasn’t integrated with the physics engine (and didn’t use an impostor, gravity (velocity, direction), or even collisions). As a simple approach, I just used a ray pointing down and moved the camera accordingly.

The current WebXRController takes advantage of the collisions and functionality of the base camera.

In my test playground I was moving up stairs and falling off edges.
Here is a diff on the original PR and the merged one:
Comparing a0492ca0060ad3bbb2e6d59f83c420f7bb1c8a03…a9c4435cadf2d06390f6c55b646a74fbfaae69cf · brianzinn/Babylon.js

Maybe that gives you some ideas? Cheers.

edit: I think you can hook into the XR frame observable to mimic some basic physics functionality in this manner. I suspect you could also hook yourself up manually as an impostor.
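That ray-plus-lerp idea could be sketched like this. The identifiers in the commented Babylon section (`xrHelper`, `rig`, etc.) are assumptions, not the actual WebXRController code; only the smoothing math is runnable as-is:

```javascript
// Sketch of "ray down + lerp": each XR frame, cast a ray down from the
// camera, find the ground height, and ease the rig's height toward it
// instead of snapping, to keep the motion comfortable.

function lerp(a, b, t) {
    return a + (b - a) * t;
}

// Pure-logic step: given the current height, the ground height under the
// player, and an eye height, return the next smoothed height.
function stepHeight(currentY, groundY, eyeHeight, smoothing = 0.1) {
    return lerp(currentY, groundY + eyeHeight, smoothing);
}

/* In Babylon it might be hooked up roughly like this (names assumed):
xrHelper.baseExperience.sessionManager.onXRFrameObservable.add(() => {
    const ray = new BABYLON.Ray(camera.globalPosition, BABYLON.Vector3.Down(), 10);
    const hit = scene.pickWithRay(ray, (m) => m.checkCollisions);
    if (hit && hit.hit) {
        rig.position.y = stepHeight(rig.position.y, hit.pickedPoint.y, 0);
    }
});
*/

// Walking up a 0.5 m step: the height eases toward groundY + eyeHeight
// over successive frames rather than teleporting in one jump.
let y = 0;
for (let i = 0; i < 3; i++) y = stepHeight(y, 0.5, 1.6);
```

The single tuning knob here is `smoothing`: close to 0 gives a slow, gentle settle; close to 1 approaches the instant snap the original proof of concept avoided.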
