I find it motivating to talk about a feature in progress… at least sometimes. This feature seems well suited to this kind of devlog/progress post. Let’s try it out!
I’m working on adding pose inertialization to Babylon.js.
To sum it up quickly: during a transition from motion A to motion B, instead of evaluating both animations and blending between them,
inertialization takes the last pose of motion A together with the evaluated animation B and, using the bone velocities, computes a more natural blend.
So the transition is both faster to evaluate and more physically plausible.
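For the curious, here is a minimal single-channel sketch of the idea in TypeScript, following the quintic formulation from Bollo’s talk (the names and API are mine, not Babylon.js code): at the moment of the clip switch you capture the offset between the two poses and its velocity, then decay that offset to zero over the blend duration and add it on top of the evaluated target animation.

```typescript
// Sketch of single-channel inertialization (quintic decay, as described in
// Bollo's GDC talk). A full skeleton would run one of these per bone channel,
// with rotations handled through the quaternion offset's axis/angle.
//
// x0: offset (last pose of A minus evaluated pose of B) at switch time
// v0: offset velocity at switch time
// t1: desired blend duration in seconds
function makeInertializer(x0: number, v0: number, t1: number): (t: number) => number {
  // Work with a positive offset so the overshoot clamp stays simple.
  const sign = x0 < 0 ? -1 : 1;
  x0 *= sign;
  v0 *= sign;

  // If the offset is already shrinking, shorten the blend so the
  // quintic cannot overshoot past zero.
  if (v0 < 0) {
    t1 = Math.min(t1, (-5 * x0) / v0);
  }

  const t1_2 = t1 * t1;
  // Initial acceleration chosen so the curve reaches zero value,
  // velocity and acceleration exactly at t1.
  const a0 = (-8 * v0 * t1 - 20 * x0) / t1_2;
  const A = -(a0 * t1_2 + 6 * v0 * t1 + 12 * x0) / (2 * t1_2 * t1_2 * t1);
  const B = (3 * a0 * t1_2 + 16 * v0 * t1 + 30 * x0) / (2 * t1_2 * t1_2);
  const C = -(3 * a0 * t1_2 + 12 * v0 * t1 + 20 * x0) / (2 * t1_2 * t1);

  // Remaining offset at time t since the switch; add it to the
  // evaluated pose of clip B.
  return (t: number) => {
    if (t >= t1) return 0; // blend finished, offset fully decayed
    const x = ((((A * t + B) * t + C) * t + a0 / 2) * t + v0) * t + x0;
    return sign * x;
  };
}
```

After the switch, each frame only evaluates clip B and adds the decayed offset, which is exactly why the transition is cheaper than evaluating and blending both clips.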
This is particularly interesting with motion matching: animation clip switches happen frequently, and even though foot placement is mostly preserved during transitions thanks to the nature of MM itself, discontinuities still happen.
An awesome GDC presentation by David Bollo is available here: Inertialization: High-Performance Animation Transitions in Gears of War - YouTube
That said, I wanted to see the difference between linear blending and inertialization in a small testbed project, and to experiment with it before integrating it into Babylon.js.
The result! Linear on the left, no transition in the middle, and inertialization on the right. Linear blending makes it look like a robot.
Also available on YT:
YT Motion inertialization
It’s more visible with a close-up on the hand trails.
In linear mode, the steps (one per frame) are evenly distributed. That’s not how a real body moves.
With inertialization, the steps get smaller as the hand approaches its destination.
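To make that spacing difference concrete, here is a tiny TypeScript comparison (the curves are illustrative stand-ins, not the actual inertialization polynomial): a linear blend produces even per-frame steps, while an ease-out curve with zero end velocity produces steps that shrink toward the destination.

```typescript
// Per-frame step sizes along a blend curve: sample at frame boundaries
// and measure how far the value moved each frame.
function stepSizes(curve: (t: number) => number, frames: number): number[] {
  const steps: number[] = [];
  for (let i = 1; i <= frames; i++) {
    steps.push(curve(i / frames) - curve((i - 1) / frames));
  }
  return steps;
}

const linear = (t: number) => t;                       // constant velocity
const easeOut = (t: number) => 1 - Math.pow(1 - t, 5); // zero velocity at t = 1

console.log(stepSizes(linear, 5));  // even steps, all ≈ 0.2
console.log(stepSizes(easeOut, 5)); // steps shrink toward the end
```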
I still have to check the velocity over the first frames. It looks like the movement starts with zero velocity; I need to verify that with a faster arm motion.
A few things I (re)learnt:
- working with angles always leads to errors: lerping between angles that aren’t in the same range, or calling acosf with a cosine of 1.0000001 and getting NaN back
- quaternion + translation is easy to work with and keeps the code readable. Still, I sometimes switched back to matrices because they feel more natural to me
- keeping values in world coordinates as long as possible in the pipeline makes it easy to convert to a local coordinate system when needed. I know this can bring precision issues, especially as the spatial range of my mocap data is large
- sometimes, take a formula as is and don’t try to understand the math behind it (twist reduction, I’m looking at you)
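Two small guards related to the angle pitfalls above, sketched in TypeScript (the helper names are mine):

```typescript
// acos guard: the dot product of two unit quaternions can drift slightly
// outside [-1, 1] (e.g. 1.0000001), and Math.acos of that returns NaN.
function safeAcos(c: number): number {
  return Math.acos(Math.min(1, Math.max(-1, c)));
}

// Shortest-path delta: wrap the difference into [-PI, PI] before lerping,
// so angles expressed in different ranges don't take the long way around.
function shortestAngleDelta(a: number, b: number): number {
  let d = (b - a) % (2 * Math.PI);
  if (d > Math.PI) d -= 2 * Math.PI;
  else if (d < -Math.PI) d += 2 * Math.PI;
  return d;
}

// Lerp between two angles along the shortest arc.
function lerpAngle(a: number, b: number, t: number): number {
  return a + shortestAngleDelta(a, b) * t;
}
```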
This post is also available on twitter as a thread : https://twitter.com/skaven_/status/1304042483240841216
A more polished and reviewed version will follow in a blog post with a demo in November. Feel free to ask questions here, give your input, …