Can BABYLON.WebXRHandTracking be instantiated prior to entering immersive mode?

I am attempting to integrate hand tracking into my headset-based user interface, working solely from the generated API docs. Getting WebXR up and going is an asynchronous process, and the call to instantiate hand tracking could go in multiple places / at multiple times. I just wanted to know in advance what the earliest point is, so I only have to design once.

The earliest would seem to be in the callback from creating the “Experience”, since that is when the sessionManager first becomes accessible.

BABYLON.WebXRDefaultExperience.CreateAsync(this.scene, options).then((defaultExperience) => {
    new BABYLON.WebXRHandTracking(defaultExperience.baseExperience.sessionManager, options);
});

But I am wondering if doing it there, before entering immersive mode, is going to cause issues. On Quest, I needed to turn hand tracking on via chrome://flags. I do not want to be chasing my tail by instantiating it before it is supposed to happen on a platform where it is still an experiment. Thanks.

Ok, I knew there had to be at least one playground. Found about five. They turn it on via the features manager. The top one I tried, https://playground.babylonjs.com/#6QR8DC#4, did not work for me, although the controllers were no longer showing.
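
The pattern in those playgrounds boils down to something like this (sketching the shape of it; exact option values vary from PG to PG):

const xr = await scene.createDefaultXRExperienceAsync();
// Ask the features manager for the feature rather than new-ing it directly.
const handTracking = xr.baseExperience.featuresManager.enableFeature(
    BABYLON.WebXRFeatureName.HAND_TRACKING, "latest", {
        xrInput: xr.input,
    });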

Seems like this is at the “kit stage”. I grabbed the 2 GLBs from https://patrickryanms.github.io for study. I will probably embed either these or something from MakeHuman using my Blender source code generator. It makes me nervous to rely on file loads for “core geometry”. If my workflow has strengths, it would be foolish not to use them.

Different question: is there a currently working PG for Quest?

Loaded one of the hand GLBs into Blender and got something I was not expecting: a skeleton with none of the bones connected to each other. There are reasonable weights, but just checking, @PatrickRyan, is this what you built, or is the Blender importer turning it into dogfood?

Edit: since all the bones are parented directly to the root bone, I am guessing this was in fact intentional. Now betting that the translation data provided by the device is relative to the root. Guess I’ll need to read the API.

@JCPalmer, yes, this is correct. The tracking data we get from the camera contains translation and rotation data for each bone in relation to the scene origin. This means we leave the skeleton root at the origin and the rest of the bones are offset from there.

If we were to use a traditional skeletal hierarchy, the bones would take their offset from their direct parent but be fed a position from the origin of the scene and would produce nightmare fuel.

This is definitely an unusual workflow and means you can’t skin the mesh in a traditional way. I had to have one skeleton and mesh with a traditional parent structure to deform in scene and paint weights, and a second unparented skeleton and mesh to transfer the weights to for export.
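
To make that concrete, here is a rough sketch of the idea using the raw WebXR hand input API (not Babylon’s actual implementation; jointToBone is a hypothetical map from joint names like "wrist" to Babylon Bone objects):

// Called every XR frame. Joint poses come back in reference-space
// (scene-origin) coordinates, so each bone is set with an absolute
// WORLD transform instead of a parent-relative one. Handedness
// conversion is ignored here.
function updateHandBones(frame, inputSource, referenceSpace, jointToBone) {
    for (const [name, bone] of jointToBone) {
        const jointSpace = inputSource.hand.get(name);
        const pose = jointSpace && frame.getJointPose(jointSpace, referenceSpace);
        if (!pose) {
            continue;
        }
        const p = pose.transform.position;
        const q = pose.transform.orientation;
        bone.setPosition(new BABYLON.Vector3(p.x, p.y, p.z), BABYLON.Space.WORLD);
        bone.setRotationQuaternion(new BABYLON.Quaternion(q.x, q.y, q.z, q.w), BABYLON.Space.WORLD);
    }
}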


Yeah, I guessed that in my edit of the previous post, with everything parented to root. Thanks for confirming. I am kind of worried about using a GLB as a delivery vehicle, though, whether embedded or not.

GLB is right-handed, and I am pretty sure that I do not want to try to mix that into my stuff, if you even can do that. Now it may be that the coordinate data provided by the device is right-handed, making it convenient to use GLB, but that might make using this orientation dependent.
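
For what it is worth, Babylon can be flipped to a right-handed world up front, which is one way to avoid mixing conventions entirely (a one-liner, though retrofitting an existing left-handed codebase is its own problem):

// Set before creating content; glTF/GLB assets should then import
// without the usual handedness conversion on the root.
scene.useRightHandedSystem = true;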

To enable the feature, use the features manager instead of creating it yourself (just like any other WebXR feature). This way you can create it before the immersive session starts, and it will start automatically when you start the session.
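
Roughly (a minimal sketch; the options follow the hand tracking feature’s documented shape):

const xr = await BABYLON.WebXRDefaultExperience.CreateAsync(scene, options);
// Enabled before any immersive session exists; the feature attaches
// automatically once a session starts.
xr.baseExperience.featuresManager.enableFeature(
    BABYLON.WebXRFeatureName.HAND_TRACKING, "latest", {
        xrInput: xr.input,
    });
// Later, e.g. from a UI button:
await xr.baseExperience.enterXRAsync("immersive-vr", "local-floor");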
