Suggestions for mapping points to skeleton?

I'm using BoneLookController to map a loaded skeleton to a set of provided points (Vector3), and it works fine once I figured out the nuances of BoneLookController.

This is more of a general request for advice, not an issue with BabylonJS.

To correctly map input points to the skeleton's BoneLookController targets, the input points need to be scaled (x/y/z). But since the pose can be anything (e.g., the coordinate range of the points is not the same if a person is standing up with arms stretched vs. sitting down with arms next to the body), there is no clear reference for how to scale them to match the skeleton.

For any given input, I can set the x/y/z scale after manually inspecting the scene, but I have no good idea how to do that automatically. Any ideas?
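For concreteness, here is a minimal sketch of what "scaling input points" means in this setup. All names and values are hypothetical (e.g. keypoints in a normalized range from a pose estimator, and a per-axis scale found by manual inspection):

```javascript
// Hypothetical input keypoints in some normalized range (e.g. 0..1 from a
// pose estimator). Plain {x, y, z} objects stand in for BABYLON.Vector3.
const keypoints = [
  { x: 0.2, y: 0.9, z: 0.0 },
  { x: 0.8, y: 0.9, z: 0.0 },
];

// Per-axis scale that currently has to be found by manual inspection; the
// question is how to derive it automatically for an arbitrary pose.
const scaleFactors = { x: 2.0, y: 1.5, z: 1.0 };

// Scaled points that would then be used as BoneLookController targets.
const scaled = keypoints.map((p) => ({
  x: p.x * scaleFactors.x,
  y: p.y * scaleFactors.y,
  z: p.z * scaleFactors.z,
}));
```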

@bghgary, do you have any ideas?

No, I don’t know much about BoneLookController. Maybe a PG illustrating the scenario would help?


It's really too much for a PG, as it needs all the keypoints and a fully rigged skeleton to illustrate.
I can post some screenshots here for illustration?

Can you load a .babylon file or a glTF file with the skeleton in it?

It probably won’t hurt. 🙂

Loading the skeleton is not an issue, but to demonstrate the problem I'd need to fully rig it with all the bone controllers and load the keypoint data, and do that for a couple of different cases to illustrate how scaling impacts different poses.

I'll post some screenshots tomorrow…


I’m not sure if this is what you are looking for, but have you tried using bone.getLocalFromAbsolute or bone.getAbsoluteFromLocal?


@adam, I don't think that would help.

My core issue is that for BoneLookController to work properly, the target has to be set to a point beyond the bone end (and it should definitely lie on the same vector).

But if my target points are not scaled properly, the resulting skeleton pose will be incorrect.
For example, a hand may be twisted inwards instead of outwards because the target keypoint for the palm is too close to the wrist.
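To illustrate that constraint, here is a sketch of pushing a look target out past the bone end along the bone's own direction. The helper names are mine, not Babylon API; joints are plain {x, y, z} points standing in for BABYLON.Vector3:

```javascript
// Minimal vector helpers so the sketch is self-contained.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const mul = (v, s) => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const len = (v) => Math.hypot(v.x, v.y, v.z);

// Place the look target `extension` units beyond boneEnd, on the same vector
// as the bone itself (boneStart/boneEnd would come from the skeleton,
// e.g. wrist and palm joints).
function extendTarget(boneStart, boneEnd, extension) {
  const dir = sub(boneEnd, boneStart);
  const unit = mul(dir, 1 / len(dir));
  return add(boneEnd, mul(unit, extension));
}
```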

Sounds trivial? The problem is that for different sets of input keypoints there seems to be a totally different ideal scale that should be applied. Sometimes it's scale.x by 2x, sometimes it's scale.y by 2x, etc.

And I have no idea what the ideal scale is until I actually do the mapping and see how it looks on the screen. The question is: what can I base the scale decision on?

I guess I don't understand the issue.

I thought you wanted to transform points from world space to local space or vice versa.

A playground would be helpful.

Consider the question closed (as there is no option to mark it as such).

I can't do a full playground illustration, and without one it doesn't make sense to continue. Thanks for trying!

FYI, I've implemented a "close-enough" algorithm that scales the BoneLookController target vectors by taking two prominently visible points (e.g. left and right knee) and comparing
a) the difference of their bone positions to
b) the difference of their anchor points,

and using the ratio of the two to rescale the BoneLookController targets.
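A sketch of that "close-enough" rescaling, under my own reading of the description (all names are illustrative, and I use a single uniform ratio of the two distances; a per-axis variant would divide the differences component-wise, at the cost of degenerate axes where the two reference points barely differ):

```javascript
// Distance between two plain {x, y, z} points (stand-ins for BABYLON.Vector3).
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Compare two prominently visible reference points (e.g. left/right knee):
// a) distance between their skeleton bone positions, vs
// b) distance between the corresponding input anchor keypoints,
// then rescale every look-controller target by that ratio.
function rescaleTargets(targets, boneA, boneB, anchorA, anchorB) {
  const ratio = dist(boneA, boneB) / dist(anchorA, anchorB);
  return targets.map((p) => ({
    x: p.x * ratio,
    y: p.y * ratio,
    z: p.z * ratio,
  }));
}
```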