Baked vertex animation from external source

Hi,

I am reading the BJS source and the serialization of baked textures.

I want to inject a premade texture rather than using BJS to set it up.
I need to confirm the format BJS expects.

The first link says:

The array of matrix transforms for each vertex (columns) and frame (rows), as a Float32Array.

So, if I understand properly, if my mesh is made of 1000 vertices, and since we deal with matrices, I should have:

// C++
const uint32_t num_floats_per_matrix = 16;
const uint32_t vertices_per_mesh = 1000; // can be anything, depending on the mesh, ofc

const uint32_t row_size = (num_floats_per_matrix * sizeof(float)) * vertices_per_mesh;

// and the size to allocate for the contiguous data, for, say, an animation of 3 frames:
const uint32_t buffer_size_to_allocate = 3 * row_size;

And if I have multiple animations, I will switch the texture input on the BakedVertexAnimationManager at runtime as necessary (different buffers, one per animation).

Do we agree on the calculation?
Am I missing something that will catch me later with the manager?
The plan is to avoid serializing and sending base64 data, which would be too large.

Thanks!

cc @Evgeni_Popov


From the source code I read, I should be good:
the Nullable data parameter of the createRawTexture function should accept the incoming Float32 data. :innocent:

That’s it, except that the doc is a bit misleading: it is not per vertex but per bone. We store a matrix per bone, not per vertex (that would be way too much!):

const textureSize = (boneCount + 1) * 4 * 4 * frameCount;
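Adapting the earlier C++ allocation to this per-bone layout, a minimal sketch (the helper names are mine; the `+ 1` reproduces the extra slot in the formula above):

```cpp
#include <cstdint>

// Total float count of the baked animation data:
// (boneCount + 1) matrix slots per frame, 16 floats (4x4) per matrix.
constexpr uint32_t bvaFloatCount(uint32_t boneCount, uint32_t frameCount) {
    return (boneCount + 1) * 4 * 4 * frameCount;
}

// Byte size to allocate for the contiguous Float32 buffer.
constexpr uint32_t bvaByteSize(uint32_t boneCount, uint32_t frameCount) {
    return bvaFloatCount(boneCount, frameCount) * sizeof(float);
}
```

So for a 10-bone skeleton and 3 frames this gives 528 floats, i.e. 2112 bytes, far less than the per-vertex estimate.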

Okay!
So I will adapt my code; it seemed high to me as well. So, for models without initial bones, like a reactor, I will need to add a bone and rotate it, so there are bone movements to bake. Am I correct?

Thanks Evgeni :+1:

Yes, baked animations only work for rigged meshes (meshes with bones and skeletons).

However, if you have a mesh without a skeleton and only need to rotate it, why use baked animations? It’s a lot simpler to rotate it directly with mesh.rotate / mesh.rotation.

Actually, I really need to free the CPU as much as I can.
I am ready to add a bone if the GPU can handle it by itself. Wrong strategy?

I would first benchmark the solution without baked animations, as it is very easy to set up.

If you find that computing the world matrices is a bottleneck, then using baked animations could help.


Will do Evgeni :wink:
Thanks

PG

Last question, I guess: if the armature itself is animated and the child bones are not, do I understand correctly that the armature is always matrix[0], animated or not?
That would give us:

armature0(root)matrix... child0_firstBoneMatrix... child0_secondBoneMatrix... childn_firstBoneMatrix... childn_secondBoneMatrix...
(I didn’t investigate enough to know how you manage multiple child bones, like a pelvis to each leg: same root, multiple children.)
In this order for each row?

And in the case of multiple armatures (oh well…), would we keep going with armature1(root)matrix and repeat the structure above?

I basically did a console.log of the vertexData in the PG at the top and read the “_texture” object.
I can clearly see the flat matrices structure; I just need the last logic bit.
From the source code, I should be right, because you read all the matrices from root to tail.
:thinking: :face_with_monocle:

Not sure what the armature is, but the list of bones is a flat list, not a list by child: you get this list through skeleton.bones. So, the texture is created by dumping the skeleton.bones matrices for frame #0, then for frame #1, and so on.
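In the same C++ style as the snippet at the top, that dump order can be sketched like this (assumptions on my side: each matrix is 16 consecutive floats in whatever order Babylon's Matrix.toArray uses, and bones come in skeleton.bones order):

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>; // one bone matrix, 16 consecutive floats

// Fill the contiguous buffer frame by frame, then bone by bone within each frame.
// frames[f][b] is the matrix of bone b (in skeleton.bones order) at frame f.
std::vector<float> dumpBakedMatrices(const std::vector<std::vector<Mat4>>& frames) {
    std::vector<float> out;
    for (const auto& frame : frames) {   // frame #0, frame #1, ...
        for (const auto& m : frame) {    // flat skeleton.bones order
            out.insert(out.end(), m.begin(), m.end());
        }
    }
    return out;
}
```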


Nice, I will study the skeleton carefully then, to make no mistakes.

Thanks a lot Evgeni, you and Sebavan are saviors :cowboy_hat_face: