Ray tracing for nested meshes

Yes.

Yes, except the bounding box is only recalculated based on the positions, not the normals.

No, normals are not automatically (re)computed if you update the positions. In scenario 2, the normals were probably the same as before.

Thanks for clarifying.
A few points :slight_smile:

  1. Normals are, by definition, perpendicular to the faces, right? So when the vertices change, the normals should change too. And indeed, when I checked the normal heatmap in the Sandbox, it looked updated even though I exported the asset with only the vertex data, without updating the normals. Any thoughts?

  2. Okay, got it: the bounding box is only recalculated from the positions, so when they change, the bounding box changes as well.

So, correct me if I am understanding this wrong: refreshBoundingInfo will never write the updated positions back to the original mesh, and to do that I have to get the new vertex data and set it myself.

Calling getPositionsData() with true/true basically gets me the vertex data after the morph is applied, which I then set as the default data for the mesh.

How do normals make a difference in this whole story?

I mean, what is the correct way to change the morph targets and properly export a mesh with the applied data?

Thanks a lot for the info!

Not necessarily, as the normals we are dealing with are normals at vertices, not faces. They can be computed, for instance, as the average of the normals of the faces the vertex belongs to.

It's not automatic. When you change the vertices (by calling setVerticesData or setVertexBuffer), the normals won't change automatically. It's up to you to recalculate them (you can call mesh.createNormals() if you want the system to do it for you).
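For instance, a minimal sketch of that flow, assuming newPositions is an updated position array you computed yourself:

// Update the positions; this does not touch the normals.
mesh.setVerticesData(BABYLON.VertexBuffer.PositionKind, newPositions, true);

// Recompute per-vertex normals from the new positions and the mesh indices.
mesh.createNormals(true); // true keeps the normal buffer updatable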

Yes.

Actually, the first true indicates that the skeletal deformation (if any) should also be applied. Morph is the second true.

I'm not sure I understand this one. Normals are also modified by the morph, so if you want to bake the deformation of a morph into your mesh, you must get the modified positions with getPositionsData and the modified normals with getNormalsData.
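For instance, a minimal sketch of that baking step for a single mesh (the getters appear as getPositionData/getNormalsData in the snippet further down; both flags are set so that skeleton and morph deformations are applied):

// Read the deformed data (first flag: apply skeleton, second flag: apply morph).
const bakedPositions = mesh.getPositionData(true, true);
const bakedNormals = mesh.getNormalsData(true, true);

// Write it back as the mesh's base geometry.
mesh.setVerticesData("position", bakedPositions);
mesh.setVerticesData("normal", bakedNormals);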

Above, I have provided a PG that does just that:


Sounds good, but I do think there should be a function to bake the scene with the applied morphs. Is there one?

Just asking if there is one. Also, when we pass true/true, you mentioned the first true represents the skeletal deformation. What do you mean by that? Is there any other way we can deform the mesh? Right now we are only working with blend shapes/morph targets.

I tried to replicate the way you baked the morph targets without calling refreshBoundingInfo, but strangely the bounding box didn't get updated. :frowning:


The above piece of code is not updating the bounding box of the mesh.

Here is the playground for it:

[MorphTargets | Babylon.js Playground (babylonjs.com)](https://playground.babylonjs.com/#0KSUE7#2)

Guess it would be easier to communicate this way. Note that only manually refreshing the bounding info updates the bounding box.

Also, is there no way to iterate through all the targets? I saw that _targets is a private variable. Can you add a getter for it?
My use case requires me to iterate over the targets, and serializing the MTM does not solve the problem.

I mean, I am able to get numTargets, iterate over that range, and then use getTarget from the MorphTargetManager class to get each target, but direct access to the entire array would have been much easier. Is there any other way that I am missing?

Thanks.

No, there is no such function. It's a bit too specific, I think (we never had a request for that!). Also, it's only a few lines of code:

scene.meshes.forEach((m) => {
    if (m.morphTargetManager || m.skeleton) {
        // Bake the current skeleton/morph deformation into the base geometry.
        m.setVerticesData("position", m.getPositionData(true, true));
        m.setVerticesData("normal", m.getNormalsData(true, true));
        // Detach the morph target manager and the skeleton so the baked data stays as-is.
        m.morphTargetManager = null;
        m.skeleton = null;
    }
});

See the doc for skeletal animations:

In the glTF case, the bounding box comes from the file data and is not recomputed when setVerticesData is called. You can either:

  • set alwaysComputeBoundingBox = true on the glTF loader
  • set mesh.geometry.useBoundingInfoFromGeometry = false for your meshes (before calling setVerticesData)
  • simply call refreshBoundingInfo()
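For instance, a minimal sketch of the first and last options, assuming the glTF loader plugin is registered under the name "gltf":

// First option: ask the glTF loader to always compute the bounding box on load.
BABYLON.SceneLoader.OnPluginActivatedObservable.addOnce((plugin) => {
    if (plugin.name === "gltf") {
        plugin.alwaysComputeBoundingBox = true;
    }
});

// Last option: refresh the bounding info manually after changing the vertices
// (first flag applies the skeleton, second flag applies the morph).
mesh.refreshBoundingInfo(true, true);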

No, that's the best way to proceed. We don't want to expose the inner workings of the classes; this allows us to evolve the code without breaking the public API.
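For reference, a minimal sketch of that public-API iteration:

const mtm = mesh.morphTargetManager;
if (mtm) {
    for (let i = 0; i < mtm.numTargets; i++) {
        // getTarget(i) returns the i-th MorphTarget registered on the manager.
        const target = mtm.getTarget(i);
        console.log(target.name, target.influence);
    }
}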


Following the baking approach above, I tried to do the same thing after applying the morphs, but the shape got distorted. Attaching the playground below.

morph playground #2 | Babylon.js Playground (babylonjs.com)

As you can see, with a 2-second setTimeout, I don't know how, but the mesh shape gets distorted back again.

CORRECTION: If I disable the line mesh.morphTargetManager = null, the shape is preserved. What are your thoughts on this strange behaviour? I baked the vertices into the mesh, yet I still cannot set morphTargetManager to null.

There's a bug when baking morph data if there is more than one non-zero influence.

This PR will fix the problem:


Alright, thanks. Glad to have contributed.
I will confirm in the next release whether that fixes my problem, which I am pretty sure it will.

Hey, can you show me how to add shadows on the ground from a model loaded via LoadAssetContainer? I am unable to figure it out.

This page should help you:

Let us know if you need additional information.

I did go through it, but I am facing some challenges :frowning:

  1. I am unable to add a light if I already have lighting from an HDRI image.
  2. Even if I use a normal light, I am unable to cast shadows from the GLB model loaded via the LoadAssetContainer loader.

Would you have a PG that fails, that we could use as a basis?

morph playground #2 | Babylon.js Playground (babylonjs.com)

You can do it like this:

Is it necessary to add a directional light to cast a shadow? Will the HDRI lighting not work for that?
Also,

shadowGenerator.addShadowCaster(scene.getMeshByName("root"), true);

Why access the root here? Can I not do the following?

container.meshes.forEach((mesh) => {
    shadowGenerator.addShadowCaster(mesh, true);
});

No, environment lighting won't generate shadows. You must use a point, directional, or spot light for that.

You can also do it this way, but it was less code the way I did it :slight_smile:
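Putting the pieces from this thread together, a minimal sketch, assuming container is the result of LoadAssetContainer and ground is your ground mesh:

// An HDRI/environment light alone cannot cast shadows; add a dedicated shadow-casting light.
const light = new BABYLON.DirectionalLight("shadowLight", new BABYLON.Vector3(-1, -2, -1), scene);
light.position = new BABYLON.Vector3(10, 20, 10);

const shadowGenerator = new BABYLON.ShadowGenerator(1024, light);
shadowGenerator.useBlurExponentialShadowMap = true;

// Register the loaded meshes (and their descendants) as shadow casters...
container.meshes.forEach((mesh) => {
    shadowGenerator.addShadowCaster(mesh, true);
});

// ...and let the ground receive the shadows.
ground.receiveShadows = true;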


How can I work with Draco-compressed files in the SceneLoader.LoadAssetContainer method?

This does not look related, but you should rather rely on glb/gltf for this, as they have native support.
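For what it's worth, a Draco-compressed glb goes through the same LoadAssetContainer call: the glTF loader handles the KHR_draco_mesh_compression extension through DracoCompression. A minimal sketch, with hypothetical self-hosted decoder paths and file name:

// Optional: point the Draco decoder at self-hosted files instead of the default CDN.
BABYLON.DracoCompression.Configuration = {
    decoder: {
        wasmUrl: "/draco/draco_wasm_wrapper_gltf.js",
        wasmBinaryUrl: "/draco/draco_decoder_gltf.wasm",
        fallbackUrl: "/draco/draco_decoder_gltf.js",
    },
};

// The compressed file then loads like any other glb.
BABYLON.SceneLoader.LoadAssetContainer("./", "model.glb", scene, (container) => {
    container.addAllToScene();
});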