Machine Learning Detector + Babylon JS

**Controlling a 3D model's bones using a Machine Learning Detector**

I am trying to control a 3D model from the output of pose detection on video or a webcam. So far I can detect poses and map them to bones, but the mapping is a hard problem. I have tried everything, but the bones either don't move, or when they do move they don't update the skinned mesh.

I'd appreciate any help.


I was able to achieve it with the Babylon "dude" model, but when I try the same mapping with a Mixamo model it does not work.


I guess this could be a great use case for motion retargeting. cc @Evgeni_Popov

If you use a Mixamo model, you probably load it from a glTF file. In that case, you must change the transformations of the transform node linked to a bone, not the transformations of the bone itself. You can get the linked transform node with bone.getTransformNode().
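To illustrate the idea, here is a minimal sketch of one common mapping step: compute a rotation quaternion that points a bone's local axis along the direction between two detected keypoints (say shoulder to elbow), then apply it to the bone's linked transform node. The math is plain JavaScript with no dependencies; the assumption that the bone's rest pose points along +Y, the keypoint names, and the Babylon.js application step at the end (shown in comments) are illustrative, not taken from the original posts.

```javascript
// Normalize a {x, y, z} vector.
function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Unit direction from keypoint a to keypoint b (e.g. shoulder -> elbow,
// as produced by a pose detector).
function boneDirection(a, b) {
  return normalize({ x: b.x - a.x, y: b.y - a.y, z: b.z - a.z });
}

// Quaternion rotating the +Y axis onto the unit direction `dir`.
// Assumes the bone's rest pose points along its local +Y axis.
function quatFromYTo(dir) {
  const dot = dir.y;                              // dot((0,1,0), dir)
  const axis = { x: dir.z, y: 0, z: -dir.x };     // cross((0,1,0), dir)
  const axisLen = Math.hypot(axis.x, axis.y, axis.z);
  if (axisLen < 1e-8) {
    // dir is parallel to +Y (identity) or -Y (180 degrees about X).
    return dot > 0 ? { x: 0, y: 0, z: 0, w: 1 } : { x: 1, y: 0, z: 0, w: 0 };
  }
  const angle = Math.acos(Math.min(1, Math.max(-1, dot)));
  const s = Math.sin(angle / 2) / axisLen;
  return { x: axis.x * s, y: axis.y * s, z: axis.z * s, w: Math.cos(angle / 2) };
}

// In Babylon.js, for a glTF skeleton, apply the result to the bone's
// linked transform node rather than the bone itself:
//   const node = skeleton.bones[boneIndex].getTransformNode();
//   node.rotationQuaternion = new BABYLON.Quaternion(q.x, q.y, q.z, q.w);
```

Note this only handles a single bone in world space; a full solution also has to express the rotation in the parent bone's local frame, which is where retargeting gets hard.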

Thanks for the tip. This is how it looks now; it's a bit funny :smiley:


Very cool demo. May I ask which pose detection you are using, since it seems to run fully in the browser? Also, your demo doesn't seem to have a z-value for depth, or are you mapping only two axes? As for the animation part: yes, bones can be tricky to map correctly. I haven't gotten far with that myself; I'm still fiddling with plain animations and trying to figure out smoothing between them.