I’m playing with the new Chrome beta and WebXR, but I can’t work out how to put an object into or onto the AR scene so it will display.
I started with the playground here: https://www.babylonjs-playground.com/#WGZLGJ#937
and I’m running it on my mobile (Nokia 6.1 Android One). When it starts I see the boombox on a blue background; after I start up the XR experience I see only the camera pass-through.
I’m breakpointing the code and can see that this scene gets three cameras:
the default arc camera
a target camera
a webxr camera
Is there a particular way to say “draw it on this camera view”? Or maybe it’s just positioning and/or layers that’s tripping me up. It’s also entirely possible I have misunderstood something very fundamental in the Babylon engine!
I’ve had a short break and realised that I probably want Babylon to render on an entirely separate layer with a transparent background. I’ll do a search for that, but it’ll need another component: how can I link the rendering camera to the WebXR camera so everything moves as one?
so!
Babylon supports the immersive-ar WebXR mode, and it is working well.
I haven’t written the AR documentation yet and hopefully will do so this week, but here is a quick demo for you:
https://www.babylonjs-playground.com/pg/KVZI50/revision/56
A few notes -
- This demo implements the old hit-test API, which doesn’t work on Chrome 80 and up.
- The demo uses the background-remover feature, which automatically removes the background for you when entering AR and restores it when leaving. No params are needed if you use the environment helper, but if you use something else, you have to define the background/skybox yourself.
- The plane-detection feature works well and should still work with newer browsers, but the APIs change constantly, so you might see an exception or two.
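If you want to enable these features yourself rather than relying on the demo’s setup, the rough shape is below. This is a sketch: the `"xr-..."` feature-name strings follow Babylon’s convention as I understand it, but check them against your Babylon version, since these APIs move quickly.

```javascript
// Sketch: enabling the background remover and plane detection through the
// WebXR features manager of a Babylon XR experience. Feature-name strings
// are assumptions - verify against your Babylon version.
function enableArFeatures(featuresManager) {
  // Removes the skybox/ground on entering AR and restores them on exit.
  // With the environment helper, no options are needed.
  const backgroundRemover = featuresManager.enableFeature(
    "xr-background-remover", "latest");

  // Plane detection; detected planes are reported through the feature
  // object as the session runs.
  const planeDetector = featuresManager.enableFeature(
    "xr-plane-detection", "latest");

  return { backgroundRemover, planeDetector };
}
```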
There are also two meshes that you should see when turning around. The color picker should work when you click on it.
If you have any questions, let me know.
Wow that is super helpful, thank you @RaananW!
I’ll test it on my phone and then rip it apart to see how it ticks
Ah yes, I did find this demo earlier today, but I moved on when it didn’t run in the Chrome 81 beta. I’ve just run it again, and it does look like only the hit-test system is blocking it.
Do you have any timescale in mind for replacing the legacy hit-tests? I’m going to have a go myself, but the way it ties into the feature manager makes me hesitant to do a hack-and-slash job (one system, yeah, for testing I can live with my mess… two systems feels like pushing my luck!)
The main issue is that the API was changing rapidly, so it made no sense to try to track the implementation.
Once the API is stable, we will implement it. You can follow this ticket - WebXR - Implement the new hit-test API · Issue #7364 · BabylonJS/Babylon.js · GitHub . I am certain I will finish it for the next release, probably during alpha/preview time already.
It seems the changes are mostly that, instead of querying a point and getting a result back, we now have to register points of interest in advance; a system then checks them later and reports what was detected there.
It’s a more async approach, and the new system satisfies the specification requirement that an implementation may limit the number of points being tracked (to reduce fingerprinting of the device or location, I presume).
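The shape of that newer API, as I read it from the spec draft, is roughly the following. Browser-only code, and a sketch rather than anything tested against a Babylon integration:

```javascript
// Register a hit-test source up front (the "points of interest" part)...
async function registerHitTestSource(session) {
  const viewerSpace = await session.requestReferenceSpace("viewer");
  // The UA may cap how many sources it tracks - the fingerprinting
  // mitigation mentioned above.
  return session.requestHitTestSource({ space: viewerSpace });
}

// ...then poll it on each animation frame (the async part).
function getLatestHitPose(frame, hitTestSource, referenceSpace) {
  const results = frame.getHitTestResults(hitTestSource);
  if (results.length === 0) return null;
  // Column-major 16-element Float32Array, right-handed WebXR coordinates.
  return results[0].getPose(referenceSpace).transform.matrix;
}
```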
Thanks again for your help, and super fast replies. I’ll see how things go when I get a half-day to do some hacking around the edges - I think I might be able to build something ugly which gets the job done by feeding data to the legacy hittest code.
Sounds like a great plan
Let me know if you manage to get it to work and maybe share some code, if you can. would be great to share with the community!
And you are right - most of the changes are functions moving around and calls to different APIs, but we as a framework need to fit all use cases. For example, if you noticed, there is an interesting debate regarding transient and non-transient input and how it should be addressed. That’s why it might take a little longer to develop.
So I’ve made a fair bit of progress but I’m hung up in coordinate systems now
I added an extra parameter to createDefaultXRExperienceAsync so we can pass any “requiredFeatures”; I’m passing “hit-test” for this experiment. That gets passed all the way down through the UI button system to requestSession, and it’s all working fine.
I also wrote an init function which requests reference spaces for “viewer” and “local” and kicks off session.requestAnimationFrame (the callback fires when an XRFrame is available, with a couple of parameters including the XRFrame itself - very useful!). It also adds a hit-test source to the session via requestHitTestSource, using the viewer space… this is my probe into the world.
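For anyone following along, my bootstrap boils down to roughly this. Raw WebXR, browser-only; the function and variable names are mine, and the requiredFeatures plumbing through createDefaultXRExperienceAsync is my local patch, not stock Babylon:

```javascript
// Sketch of the init flow: request an immersive-ar session with the
// hit-test feature, grab the reference spaces, register the probe, and
// start the frame loop.
async function initArHitTest(xr /* e.g. navigator.xr */) {
  const session = await xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"], // threaded down from the UI button
  });

  const viewerSpace = await session.requestReferenceSpace("viewer");
  const localSpace = await session.requestReferenceSpace("local");

  // The probe into the world: hits along the viewer's forward ray.
  const hitTestSource = await session.requestHitTestSource({
    space: viewerSpace,
  });

  session.requestAnimationFrame(function onFrame(time, xrFrame) {
    // xrFrame exposes getHitTestResults / getViewerPose for this frame.
    session.requestAnimationFrame(onFrame);
  });

  return { session, localSpace, hitTestSource };
}
```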
Perhaps you can help with the final stage? I want to add an icosphere at the hit-test detection point to verify that everything is working properly. Unfortunately, simply decomposing the matrix returned from WebXR doesn’t give me values that are directly equivalent in Babylon. My sphere does appear to detect distance, but it sits about 160 degrees to the left and frequently moves in the wrong direction when I turn the camera. So… rotation and translation are using different orders, origins and/or handedness. I’ve been digging through Babylon looking for the transforms, but so far my cut-and-paste snippets aren’t fixing this last problem.
Do you know or can you point me at docs for the Babylon coordinate system and the WebXR coordinate system, including any difference in the quaternions?
EDIT: I’ve just discovered the handedness switch on the matrix, and it does seem to improve things significantly! I still have problems where the world origin seems to jump around as I try to paint my walls and floor with tiny icospheres. There’s some code which multiplies the hit-test matrix by the worldParentNode matrix; I’ll try that next.
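For what it’s worth, the conversion the handedness switch performs can be reproduced in plain JS. I believe Babylon’s Matrix.toggleModelMatrixHandInPlace negates the z-related terms of the flat 16-element array (the same layout WebXR uses, translation in elements 12–14) - treat the indices below as my reading of it, not a verbatim copy of Babylon internals:

```javascript
// Convert between right-handed (WebXR) and left-handed (Babylon default)
// coordinates by negating the z-related entries of a flat 4x4 matrix.
function toggleMatrixHandedness(m) {
  const out = m.slice();
  for (const i of [2, 6, 8, 9, 14]) out[i] *= -1;
  return out;
}

// A translation of +5 along z flips to -5 (element 14), and toggling
// twice gets you back where you started:
const t = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 5, 1];
console.log(toggleMatrixHandedness(t)[14]); // -5
```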
Handedness! And relativity
The current recommended AR mode is local, which positions the headset relative to where the session started rather than relative to the floor (as opposed to local-floor). The reason I chose it is that the old hit test and plane detection worked better with local.
So - the hit-test result is relative to the camera, which should be its “parent”, at least at the exact moment it is added to the scene. Afterwards, it should stand alone.
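To illustrate the “relative to the camera” point: at placement time you transform the camera-relative hit position by the camera’s world matrix once, and afterwards the mesh lives in world space with no parent. A plain-JS sketch with column-major flat matrices, as WebXR uses (names are mine):

```javascript
// Transform a camera-relative point into world space using the camera's
// world matrix (column-major, flat 16-element array).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Camera at head height (0, 1.6, 0), hit detected 2m straight ahead
// (-z is "forward" in WebXR): the marker lands at world (0, 1.6, -2).
const cameraWorld = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 1.6, 0, 1];
console.log(transformPoint(cameraWorld, [0, 0, -2])); // [ 0, 1.6, -2 ]
```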
Hope this gets you further. And I hope we will find time in the near future to implement the new hit-test feature.