Issues importing GLTF models in a WebXR (AR) Immersive Scene

Hey there!

After several hours trying to understand what’s wrong with my code, I’ve found that there might be an issue with the glTF importer?

The model loads and renders perfectly fine; however, when I instantiate it in the scene, it spawns in the wrong place - seemingly mirrored across the X-axis in world space, relative to the hit test. The test logs position X, the instantiated mesh logs position -X, exactly symmetrical to where it should’ve been placed.

I’ve reached the conclusion that this is a glTF-specific issue, because I tried importing the exact same model as OBJ and it works as it should, no issue whatsoever.
I’d prefer to keep my workflow with glTF, though, especially when the asset has animations/a rig to import - which OBJ doesn’t support.

I’m exporting my assets from Blender (latest stable release - 2.91.2), using the default/built-in glTF exporter (default settings, only set to export just the selected objects).

In the docs there’s this info (from this page - Blender to BJS, using glTF | Babylon.js Documentation):

To help transforming, note that the BabylonJS loader will automatically set glTF assets as children of an object:

  • named __root__
  • rotated by default to 180° on the Y axis
  • scaled on Z by -1

Which makes me wonder whether the issue stems from those pre-built options/transformations? Especially the ‘__root__’ stuff…
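As a sanity check on that suspicion, here’s a minimal sketch (plain JavaScript, no Babylon dependency, and my own composition order, which is an assumption) of what those two root transforms do to a point: the 180° Y rotation maps (x, y, z) to (-x, y, -z), and the Z scale of -1 then maps that to (-x, y, z) - i.e. exactly an X mirroring like the one in the symptom:

```javascript
// Apply the glTF root transforms described in the docs to a point:
// a 180-degree rotation about Y, then a scale of -1 on Z.
function applyGltfRootTransform([x, y, z]) {
  // 180° rotation about the Y axis: (x, y, z) -> (-x, y, -z)
  let p = [-x, y, -z];
  // scale Z by -1: (-x, y, -z) -> (-x, y, z)
  p = [p[0], p[1], -p[2]];
  return p;
}

console.log(applyGltfRootTransform([2, 1, 3])); // [ -2, 1, 3 ]
```

Applying the two steps in the opposite order gives the same result, so the net effect is simply a negated X with Y and Z unchanged.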

Any suggestions on how to properly export and import glTF from Blender to BabylonJS?

Hi VisionsMind,

It’s a bit hard to tell without a Playground, but this does sound like it might be an effect of changing handedness; @Drigax might know more about that. Is there any way you could show us either the model or your placement code (or ideally both) in a Playground? That way we could look at it directly and have a much better chance of giving sound advice. Thanks!


Thank you for replying!
My setup is inside a VueJS app, so I can’t share it in a Playground demo.

I’ve recorded a video from my phone showcasing the issue (link from OneDrive):
https://1drv.ms/v/s!AjstMy_m0nNsjIZDl5dKb2CwfAMm4w?e=VTeKhz

1st shot → (model with no textures) OBJ file of the model; nothing changed in the code, just the file format.
2nd shot → (textured model) glTF file of the exact same model; same code again.

As you can see, with the OBJ model everything works exactly as intended; with the glTF model there’s a mirror across the global X-axis relative to where it’s supposed to spawn, and it loads a bit slower (but this might be inherent to the info in the file vs OBJ).

Just to reiterate: in both cases, the only difference in anything - from the model (transformations etc.) to the entire code - is strictly the file format of the same model.

The test logs position X, the instantiated mesh logs position -X, exactly symmetrical to where it should’ve been placed.

That sounds expected… without knowing exactly what you’re doing. The mesh is no longer in world space; it’s in our transform space after performing the coordinate-system transform from the right-handedness of glTF to the (by default) left-handedness of Babylon.js.

You should now be manipulating the mesh via its top level object, else your transforms won’t be in the space of your scene.


Hmm, that could be something indeed!
As I’m manipulating visibility, etc., I’m saving the mesh parts (there are two in this case), parenting one mesh to the other, and manipulating the parent mesh.

Any suggestions as to how to implement these manipulations to the ‘root’ specifically?

If you’re parenting one mesh to the other, the parent mesh should still have some top-level parent in the scene, right?

Again, it’s hard to know what you’re doing without a Playground scene that we can look at.

I would think so, but that’s what I’m using to control its position - the parent mesh.
But does that account for this ‘root’ parent? How would I get access to it?

Did you see my other post with the screen recording showing this issue?
I’m sorry, but this is running in a VueJS app, so I’m afraid I can’t get it into a Playground to demo.

Here is the excerpt of code where I deal with the file loading:

async meshLoader(assetName = 'character') {
    // Mesh loader: import the glTF file and group its two parts
    await BABYLON.SceneLoader.ImportMeshAsync('', 'assets/', `${assetName}.gltf`, this.scene);
    const charMesh = this.scene.getMeshByName('character');
    const tokenMesh = this.scene.getMeshByName('token');
    charMesh.addChild(tokenMesh);
    this.character = charMesh;
}

From this point on, I’m dealing only with the this.character property.
Again, nothing but the file extension changes in any of the code.

Exactly - if you try to manipulate this.character, you’re in right-handed space, not your scene space. I’d suggest traversing up from charMesh to get its top-level parent, and assigning that to this.character instead.
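A minimal sketch of that traversal, assuming only that each mesh-like node exposes a `parent` property (as Babylon meshes do); the helper name is mine:

```javascript
// Walk up the .parent chain until we reach a node with no parent --
// for a glTF import, that top-level node is the loader's "__root__".
function getTopLevelParent(node) {
  let current = node;
  while (current.parent) {
    current = current.parent;
  }
  return current;
}

// Usage with plain objects standing in for meshes:
const root = { name: '__root__', parent: null };
const charMesh = { name: 'character', parent: root };
console.log(getTopLevelParent(charMesh).name); // "__root__"
```

In the loader code above, `this.character = getTopLevelParent(charMesh);` would then put all subsequent transforms in scene space.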

As far as creating a Playground goes, you can more or less copy-paste the relevant model-loading and initialization code into the Playground scene without too much hassle.

Alternatively, you can also just use a right handed scene via scene.useRightHandedSystem = true


That’s awesome! This clarifies a lot!
Just tested it out, and using the root node to transform wasn’t enough (still had the exact same issue).
Converting the scene’s coordinate system to right-handed worked perfectly!
Thank you so much!

Apparently I’m not completely out of the woods yet :frowning_face:
With the scene set to right-handed, the shadows from a directional light simply don’t work and materials react a bit weirdly.
Turning the Right Handed System back to ‘false’, lights and shadows react the way they should again.
I also tried adjusting the directional light’s angle to accommodate the axis change, but that didn’t work either. Is it possible that the flip in the X-axis turned the meshes inside-out? Any suggestions?

Can you make a simple Playground with a lighting setup that can repro this? That would make it easier for @Evgeni_Popov or me to take a closer look.


Hey there!
Thank you for helping, again!
So here’s a quick test using the demo that’s in the Docs, which has the exact same issues when using the Right Handed System:

I’m running this on the latest Android Google Chrome browser release from the Google Play Store.
So, if you run both of these demos on your Android phone, you should notice a considerable difference in the materials in the scene, as well as the missing shadows, in the Right Handed System version of the demo.

Can you see the difference on your end? I suspect it might be something related to the geometry generated by the plane detection/triangulation clashing with the Right Handed System coordinate switch, but I have no clue, really. Any suggestions?

BTW, let me know if I should move this to a new context in the forum (bugs section, a new question, etc.).
I don’t know if it’s getting too off-topic from the original post/title, although it’s a related issue.

This thread seems to relate to a similar or related issue: Another gltf import question - #9 by derelict

From reading that thread, I get the sense that importing glTF files is a bit of a hassle.
This ‘sideOrientation’ parameter has barely any documentation; this is all the info there is, be it in Material, MeshBuilders, etc.:

sideOrientation: number
Stores the value for side orientation

So, yeah, alright, we know it’s of type number and vaguely what it is/does. Any info on how to actually use/set/get this parameter?
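For what it’s worth, on a material `sideOrientation` takes one of two numeric constants, `BABYLON.Material.ClockWiseSideOrientation` or `BABYLON.Material.CounterClockWiseSideOrientation`, and flipping it is one way to fix faces that render inside-out after a handedness change. A hedged sketch - the constant values below mirror what I understand Babylon’s to be, and the helper plus the mock material are mine, not part of the API:

```javascript
// Assumed values of Babylon's constants:
//   BABYLON.Material.ClockWiseSideOrientation        === 0
//   BABYLON.Material.CounterClockWiseSideOrientation === 1
const ClockWise = 0;
const CounterClockWise = 1;

// Toggle a material's winding order, e.g. after switching the scene
// to a right-handed coordinate system turned faces inside-out.
function flipSideOrientation(material) {
  material.sideOrientation =
    material.sideOrientation === ClockWise ? CounterClockWise : ClockWise;
  return material;
}

// Usage with a plain object standing in for a material:
const mat = { sideOrientation: ClockWise };
console.log(flipSideOrientation(mat).sideOrientation); // 1
```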

I am not sure anymore if it is a GLTF issue, right handed system issue, or something else :slight_smile:

Would you be able to share the model? Could we experiment with it a bit to understand what is going on?


Thank you so much, again, for helping!

I’m currently not at my office, I’ll see if I can make a quick test model with the exact same setup and upload it here ASAP.

Meanwhile, since my latest update on this thread, I’ve spent some more time experimenting with the “alternative” solution: using the root mesh that the glTF importer creates instead of turning the Right Handed System on. Using the ‘__root__’ to make transforms kinda works, although, in my opinion, it makes the content-production workflow cumbersome. As with the demo in the Docs, the model ends up with the wrong orientation - the back facing the camera instead of the front of the character. Having to model and rig assets facing backwards (180° in the Z-axis) so that they appear facing the right direction in Babylon is not ideal.

As for the Right Handed System solution - it doesn’t seem to work properly with the demo from the Docs either. It seems it’s probably causing the plane triangulation to be generated inside-out? I’m not sure, but the materials react weirdly and the shadow caster isn’t able to render any shadows over the triangulated/tracked planes.

Can it be reproduced outside of XR? Are you referring to the polygon objects constructed from the plane geometry data?

The Right Handed System seems to work alright with non-XR scenes, I’ve tested it with this Directional Light Demo already: https://playground.babylonjs.com/#2898XM#3
Adding the line:
scene.useRightHandedSystem = true;
Right after the scene object is instantiated seems to work fine; it just naturally flips the scene on the X-axis, and everything seems to work - materials, shadows, light.

Yeah, I’m referring to the polygons generated when ARCore tracks planes on surfaces - from the WebXR demo in the Docs. (In this earlier post, the issue is reproduced using the Playground demo - Issues importing GLTF models in a WebXR (AR) Immersive Scene - #13 by VisionsMind)


There seems to be an issue with right-handed plane detection. I will tackle that when I find the time.
