Check out this iteration:
The two objects that you are applying the same texture to do not have the same UVs. Babylon textures define (0,0) as the top left of the image, while WebGL and glTF define (0,0) as the bottom left. Therefore, to share an image between models authored in the two formats, the image needs to be inverted. (The alternative is converting the glTF model's texture coordinates to Babylon's UV convention, which will dramatically slow down load times.)
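For reference, converting texture coordinates between the two conventions is just a flip of the V axis about the midline. A minimal sketch in plain JavaScript (`flipUVs` is a hypothetical helper, not a Babylon API):

```javascript
// Convert glTF/WebGL UVs (origin at bottom left) to Babylon's
// DirectX-style convention (origin at top left), or back again.
// U is identical in both conventions; only V is mirrored.
function flipUVs(uvs) {
  const out = new Float32Array(uvs.length);
  for (let i = 0; i < uvs.length; i += 2) {
    out[i] = uvs[i];             // U unchanged
    out[i + 1] = 1 - uvs[i + 1]; // V mirrored about 0.5
  }
  return out;
}
```

Note that this has to touch every vertex of every mesh sharing the texture, which is why doing it at load time is costly compared to flipping the image once.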
When we use a texture imported directly into Babylon, we interpret the texture data as inverted by default, in order to fit Babylon's DirectX-like texture space convention.
As an experiment, let's try applying the texture from our glTF directly onto our Babylon file:
Here's a documentation page by @PatrickRyan that explains this disparity in depth:
Taking the exported scene into Blender also demonstrates that these can't really be treated as equivalent assets:
I would recommend inverting the textures used on your glTF assets by cloning the texture (we use an internal data structure to contain the actual texture buffers); the clone simply gives us another way to interpret the same texture data:
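To illustrate what that clone-and-reinterpret amounts to at the buffer level, here is a sketch that vertically flips a raw RGBA pixel buffer in plain JavaScript (`flipRows` is a hypothetical helper for illustration, not a Babylon API):

```javascript
// Vertically flip a raw RGBA8 pixel buffer: row 0 swaps with the
// last row, row 1 with the second-to-last, and so on. This is the
// net effect of reinterpreting a texture's V origin without
// touching any mesh UVs.
function flipRows(pixels, width, height) {
  const rowBytes = width * 4; // 4 bytes per RGBA pixel
  const out = new Uint8Array(pixels.length);
  for (let y = 0; y < height; y++) {
    const src = y * rowBytes;
    const dst = (height - 1 - y) * rowBytes;
    out.set(pixels.subarray(src, src + rowBytes), dst);
  }
  return out;
}
```

Since only the cloned copy is flipped, the original texture keeps working for your Babylon-authored meshes while the clone matches the glTF convention.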