So a colleague at Pixcap and I have been working on a method to merge imported duplicate skeletons into a single skeleton.
We are able to create clean skeletons from most imported assets that come in with duplicated skeletons for some reason (see this topic: BJS Imports some GLBS with Multiple Duplicate Skeletons, while Three jsEditor and Blender Sees One?).
Now, as we try to merge the VertexBuffer data on the meshes, we are running into some problems.
It's really close, but maybe there is something we are not considering, or there is a logic flaw. Regardless, we were really hoping someone who understands what we are trying to accomplish could spot what's going wrong with this setup.
It's really, really close on assets like the alien.glb, and it even gets one of the skeletons completely correct in the gym-beasts. Other than that, though, it's pretty much spaghetti monsters.
To toggle whether the parsing happens, flip the boolean on line 29, and to change the asset, change the number right there as well.
The SkeletonViewers will then take turns displaying after the asset loads, to help show what's going on.
Also, a small note: there are lots of loops on loops on loops in this, so it's kinda scary… Kudos to anyone who figures this out; I'll for sure send you some tokens of appreciation.
@mrlooi will want to follow the progress here.
Wow, that is a monster PG. Mind if I say what I would try? If there is more than one skeleton, but they are identical, right? Then delete all but the first encountered, and just go through all the meshes and assign the kept skeleton to each.
Am I missing something?
Basically, what we are doing is identifying all instances of the same bone. This is established by looking at the skeletons' linkedTransformNodes and testing whether they are the same; if they are, we can assume it's the same skeleton.
From that we figure out whether a bone needs to be on a unique skeleton; if so, we create that skeleton, then pass all the bone data that needs to be on it over from the old bone data. Then we link the new skeleton's bones to the appropriate transform nodes.
After that we look at which meshes are affected by the new skeletons. If a mesh had a skeleton that was split into multiple ones, we now have to update the vertex weights and indices to match the new skeletons' indices.
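A minimal sketch of that grouping step, using plain objects in place of Babylon skeletons and bones (`linkedTransformNode` is the real Bone property; the rest of the shapes here are just assumptions for illustration):

```javascript
// Group bones from several imported skeletons by the transform node they drive.
// Bones sharing the same linkedTransformNode are treated as duplicates of one
// logical bone that should end up on a single merged skeleton.
function groupBonesByTransformNode(skeletons) {
    const groups = new Map(); // transform node -> array of duplicate bones
    for (const skeleton of skeletons) {
        for (const bone of skeleton.bones) {
            const node = bone.linkedTransformNode;
            if (!groups.has(node)) {
                groups.set(node, []);
            }
            groups.get(node).push(bone);
        }
    }
    return groups;
}
```

Every entry whose array holds more than one bone is a duplicate to collapse; entries with a single bone already belong to exactly one skeleton.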
Actually, while talking through this, I think I might have figured out what I was doing wrong. We might not even need to update the weights, just iterate through the indices data and replace each old bone index with the correct new one.
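That index rewrite might look something like the following. `matricesIndices` stands in for the flat array you would get from `mesh.getVerticesData(BABYLON.VertexBuffer.MatricesIndicesKind)` (four bone indices per vertex), and `oldToNew` is a hypothetical map from a bone's index in the old skeleton to its index in the merged one:

```javascript
// Rewrite each bone index in place so it points into the new, merged skeleton.
// matricesIndices: flat array, 4 entries per vertex.
// oldToNew: Map of old bone index -> new bone index.
function remapBoneIndices(matricesIndices, oldToNew) {
    for (let i = 0; i < matricesIndices.length; i++) {
        const oldIndex = matricesIndices[i];
        if (oldToNew.has(oldIndex)) {
            matricesIndices[i] = oldToNew.get(oldIndex);
        }
    }
    return matricesIndices;
}
```

The weights stay untouched because each weight still lines up with the same slot; only the bone it points at has moved. Afterwards you would write the buffer back with `mesh.setVerticesData`.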
With Mapping Unification
Really close, but about the same results I was having. There must be a step we are missing.
I've found some errors in our logic already, so maybe it's something dumb.
OK, this is super close. If you go into the weights debug shader for the mesh, they all seem to be there and are correct now. But it's still a spaghetti monster, except for the legs… which is showing promise!
Starting to think some of this might be transform node related.
OK, it looks like my indices are still getting mapped wrong, because the right leg and left arm both have the same index on a bone mapping, which is not correct. Getting closer on this.
Pretty close now. Just need someone who knows more about what's going on to take a look.
I've got it working on the alien and the gym beasts.
For some reason, though, the girl worker has been giving me the runaround.
I have tried everything I can think of, and now I'm just plum out of ideas… I'm starting to think it's something to do with the matrices, as it seems the indices and weights are all correct, but perhaps the poseMatrix or something for the old bones vs. the new ones is different, but only for certain old skeletons?
Been really struggling with this and could use ANY SORT OF INPUT!
Sorry for the monster PG. The main thing to read through is ParseGLTFContainerSkeletons, starting at line 88.
Anyone who helps me solve this I will owe you a great deal.
OK, so what does it mean when a mesh's position looks like this:
when the skeletons and animations are turned off.
This is all prior to any skeleton consolidation; it's the original data.
And then when I turn the skeletons on:
The head and hands go to the right place.
So which matrices are responsible for this? I am pretty sure I see nothing but identity pose matrices, and I don't know which one to look at now to make sure I have taken that into consideration.
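For what it's worth, here is a tiny helper to double-check that a matrix really is identity within tolerance; in Babylon, `bone.getBaseMatrix().m` exposes the flat 16-float array, and the epsilon here is an arbitrary choice:

```javascript
// Check whether a flat 16-float matrix (e.g. bone.getBaseMatrix().m in Babylon)
// is the identity matrix within a small tolerance.
function isIdentityMatrix(m, epsilon = 1e-5) {
    if (!m || m.length !== 16) return false;
    for (let i = 0; i < 16; i++) {
        // diagonal entries of a flat 4x4 sit at indices 0, 5, 10, 15
        const expected = (i % 5 === 0) ? 1 : 0;
        if (Math.abs(m[i] - expected) > epsilon) return false;
    }
    return true;
}
```

Running each bone's matrices through this would at least confirm whether the head/hand offset is hiding in one of them.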
Now, once I run the skeleton merge function and don't turn on the animations, the results are this:
Which leads me to think that I am really, really close. I just need to figure out what matrix is different for the head and hand meshes.
The first person to help me figure this out, I will happily pay you something, if that helps motivate people.
Looked briefly at this yesterday…
At first sight it seems to be a buggy base mesh: the meshes are positioned (or baked) at bad positions, and then the bug was fixed in the animations instead of in the actual mesh data…
I'll give it a go and look at the actual data, to see if I can find where it happens.
Yeah, if you can help me with this I will owe you big time!
I know it's a real fringe model, but that's the major reason we are attempting this. And it's sooo soo soo close.
Honestly, at the moment I can't imagine any solution that will be practical.
Maybe we can make this mesh work by hard-coding the specific fixes needed, but the next buggy mesh will be different; I don't know.
Skinning needs to be removed, mesh fixed/baked correctly, reskinned.
Try running this in your console:
You can see again that the face mesh (and the other offset meshes will be the same), as in your own screenshot, is in the wrong position compared to the pants.
let face = scene.getMeshByName('face');
face.skeleton = null; // detach skinning so the baked vertex positions show
let pants = scene.getMeshByName('clothingSet_04_pants');
pants.skeleton = null;
Looking at the position vertices data:
the first index in face is baked at around 2 (x),
while the first index in pants is baked at around 0 (x).
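That comparison could be turned into a crude detector. This assumes both arguments are the flat arrays returned by `mesh.getVerticesData('position')` (x, y, z per vertex), and it only looks at the first vertex of each, so it is a rough heuristic, not a general fix:

```javascript
// Rough heuristic: compare the first vertex of two flat position arrays
// (x, y, z triplets) and flag a suspiciously large baked-in offset.
function hasLargeBakedOffset(positionsA, positionsB, threshold = 1.0) {
    const dx = positionsA[0] - positionsB[0];
    const dy = positionsA[1] - positionsB[1];
    const dz = positionsA[2] - positionsB[2];
    return Math.hypot(dx, dy, dz) > threshold;
}
```

With the face and pants data from above, the roughly 2-unit gap on x would trip the default threshold.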
Maybe you can detect whether there's a large difference in positions in the bones and then avoid merging those?
I'm no expert in bones or animations, but I had a shower thought of my own on this. I don't know if it's possible.
If we can read rest-pose data from the bones/skeleton,
then compare it to the vertex position data,
I think a "normally working" mesh should compare to 0 +/- floating-point precision.
So if there is a difference, we can subtract/add it to the vertex positions as needed.
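The correction at the end could be sketched like so; `positions` is again the flat array from `mesh.getVerticesData('position')`, and `offset` is whichever per-axis delta you measured between the rest pose and the baked data (how to read that delta out of the skeleton is the open question):

```javascript
// Shift every vertex in a flat position array (x, y, z triplets) by -offset,
// re-centering the baked mesh data onto the rest pose.
function subtractOffsetFromPositions(positions, offset) {
    const fixed = positions.slice(); // leave the original buffer untouched
    for (let i = 0; i < fixed.length; i += 3) {
        fixed[i]     -= offset[0];
        fixed[i + 1] -= offset[1];
        fixed[i + 2] -= offset[2];
    }
    return fixed;
}
```

You would then write the result back with `mesh.setVerticesData('position', fixed)`.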
That’s kinda what I was thinking, but I was worried it would not be so simple if the matrix rotated the vertices.
But the matrix shouldn't have any effect on the internal position data
(unless it gets baked);
it's kind of like a mesh parent: it rotates, scales, and moves the mesh in world space, but the local/internal position data is not changed directly
(mesh.getVerticesData('position')).
I'll give it a shot. That means I'd have to update the animation values as well, though, huh?