**Controlling a 3D model's bones using a machine-learning pose detector**
I am trying to control a 3D model from the output of pose detection running on video or a webcam. So far I can detect poses and map them to bones, but the mapping is a hard problem. I have tried everything, but the bones either don't move, or when they do move they don't update the skinned mesh.
If you use a Mixamo model, you probably load it from a glTF file. In that case, you must change the transformations of the transform node linked to a bone, not the transformations of the bone itself. You can get the linked transform node with `bone.getTransformNode()`.
Very cool demo. Might I ask which pose detection you are using, since it seems to run fully in the browser? Also, your demo doesn't seem to have a z-value for depth, or are you mapping only two axes? About the animation stuff: yeah, bones can be tricky to map correctly. I haven't gotten far with that myself; I'm still fiddling with plain animations and trying to figure out smoothing between them.