Are complex mesh animations like facial/hand animations just vertex animations?

The question here talks about animating vertices and shows this playground example for how to do it.

Provided I have all the vertex coordinates for a face topology, would this be the correct way to dynamically animate a face? I’m not going for anything fancy; I’m thinking something like Andross from SNES Star Fox.

Or is there another preferred method?

Thanks so much for your fine work, everyone!

Hey @justjay. Welcome to the Babylon Family! Great to have you here!

With most things in game/render engines, there’s really no ONE way to do anything.

That said, it kind of depends on the end goal you’re going after. When it comes to animating faces, most people tend to use morph targets. If you’re not familiar with them, think of one as a pre-determined ‘shape’ for the face. So perhaps you have a single mesh to represent the face, but you store different “expressions” as morph targets. You might have a neutral pose, with a morph target for angry, happy, or sad. Then you can animate the ‘influence’ of each morph target, so you could also be half angry and half sad, for example.
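Here’s a minimal sketch of what that can look like in Babylon.js code, assuming you already have a base face mesh plus two same-topology meshes posed “angry” and “sad” (those three mesh variables are placeholders; the MorphTarget calls themselves are standard API):

```javascript
// Minimal sketch: blend two expression morph targets on one face mesh.
// faceMesh, angryPoseMesh, and sadPoseMesh are assumed to exist already.
const manager = new BABYLON.MorphTargetManager(scene);
faceMesh.morphTargetManager = manager;

// Each target is built from a mesh with the same topology, posed differently.
const angry = BABYLON.MorphTarget.FromMesh(angryPoseMesh, "angry", 0);
const sad = BABYLON.MorphTarget.FromMesh(sadPoseMesh, "sad", 0);
manager.addTarget(angry);
manager.addTarget(sad);

// "Half angry and half sad": influences are just numbers between 0 and 1.
angry.influence = 0.5;
sad.influence = 0.5;
```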

This technique also works for basic mouth shapes for talking characters. Something perhaps like this:

You’d have a morph target for each of those mouth shapes, then you can animate a single property (the influence) to go between them.
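As a rough sketch, animating that one property can look like this (mouthOpen is a placeholder for a MorphTarget you’ve already created or imported):

```javascript
// Rough sketch: key a single morph target influence to open and close the mouth.
const talk = new BABYLON.Animation("talk", "influence", 30,
    BABYLON.Animation.ANIMATIONTYPE_FLOAT,
    BABYLON.Animation.ANIMATIONLOOPMODE_CYCLE);
talk.setKeys([
    { frame: 0, value: 0 },   // mouth closed
    { frame: 15, value: 1 },  // mouth fully open
    { frame: 30, value: 0 },  // closed again
]);
// mouthOpen is the hypothetical MorphTarget whose influence we animate.
scene.beginDirectAnimation(mouthOpen, [talk], 0, 30, true);
```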

Again, there’s no one way to do anything, but for faces, most folks tend to go that direction.

P.S - Babylon.js fully supports Morph Targets…and as of just a couple weeks ago, it now supports unlimited simultaneous morph targets!

Holy smokes! Great infographic. I’ll check out morph targets. I don’t suppose it’s possible to dynamically create morph targets, is it? I’m trying to play to my strengths (as my coding is better than my asset creation / artistic abilities), and I have some adaptive routines I’m working on that hopefully correspond to facial movements and hand gestures. So I’m hoping to just move around a bunch of blocks or particles until I get better at asset work.

Edit: removed example b/c I’m shy about incomplete work :sweat_smile:

Also, on a personal note, @PirateJC you’re the man. Thanks for the awesome content that always keeps me motivated to keep trying!

Super kind words! Thanks so much!

Good question about generating mouth shapes dynamically. It’s over my own head, but I bet @syntheticmagus would be super interested to weigh in on this!

Oh and just for fun.

Here’s another cool image that shows how many basic mouth shapes there were for one of the characters in Cloudy With A Chance Of Meatballs 2:

Wow! I don’t even think my face has that many mouth shapes! Thanks for sharing. Have much to learn.

(Forum etiquette question: I have some more thoughts on the topic – if this were a Slack channel I’d probably keep discussing. Am I supposed to make a new question or keep going here? For the sake of time I think I’ll keep going, but if I’m supposed to do something else, let me know.)

Yesterday’s discussion had me thinking about games like the Morrowind games and Mass Effect (Andromeda doesn’t count), which feature the now-common “character creation” (face) editor. To power the facial expressions in those, do you suppose the in-game editor created custom morph targets, or do you think they just applied shaders to a stock set of morph targets, or perhaps used some other technique?

There are 2 possibilities each for both hands and face / speech. For expressions / speech:

  • The use of a single bone to open / close the mouth. This has the advantage of being far simpler. You might vary the amount / degree per viseme to enhance it, but animating one bone’s rotation is easy to do on the fly (see the sketch after this list). You need a good weight mapping for bones to work, but unless you are going to “free sculpt” each morph target manually, you would need that weight mapping anyway in order to create each of your targets using a bone.

  • Using morph targets is much more complicated / kind of a rabbit hole. In 2017, I made a morph-based first attempt, QI.Automaton, using my animation system. I only created 24 simple morph targets, and made composite targets from them. BJS itself can now handle that many base targets, though I have never checked whether there is a limit on how many can be used at one time.
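For the single-bone route, the per-frame work really is small. A hedged sketch, assuming the rig happens to have a bone named “jaw” (the bone name and angle are made up for illustration):

```javascript
// Sketch: open / close the mouth by rotating one jaw bone each frame.
// "jaw" is an assumed bone name; check skeleton.bones for your rig's real names.
const jaw = skeleton.bones.find((b) => b.name === "jaw");
let openAmount = 0; // 0 = closed, 1 = fully open, driven by your viseme logic

scene.onBeforeRenderObservable.add(() => {
    // Map the open amount to a small pitch rotation in the bone's local space.
    const angle = openAmount * 0.4; // radians; tune per rig / per viseme
    jaw.setYawPitchRoll(0, angle, 0, BABYLON.Space.LOCAL);
});
```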

Regardless of method, sync is going to be an issue. Doing simple stuff by hand is fine unless you want to graduate to a kind of text-to-generated-speech / animation pipeline. I turned the data from Carnegie Mellon University’s ARPABET project (120k words) into a database. While it had all the phonemes (I built a phoneme-to-viseme translation table), depending on where in a syllable a phoneme is, or the stress of the syllable in the word, it may not need a viseme. Unfortunately the CMU data does not break words out by syllable, so I wrote an editor & edited the 44k words I decided to keep, adding a syllable breakout for each pronunciation.

Even this did not actually solve the sync problem, but I have also developed my first-generation voice font system. When you are generating BOTH, sync is automatic, but it is a major undertaking.

I will be starting on my next generation of the font system in a week or so, but this is a big time investment, so think hard before deciding against just using a single bone.


For hands, the same 2 options, bones or morphs, apply:

  • Morphing is much more restrictive for hands than for the face, though. Composite targeting is not possible, since each finger affects some verts in between & ruins it; you end up with spiked verts between fingers. It is OK if you have a single target for each gesture, like here: Finger Shapekeys, but you cannot use this to build composites to generate infinite poses.
  • With bones, you are going to need 25 bones per hand, but you can do compositing by finger (rough sketch below). It took me months, & it is also not for the faint of heart.
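A very rough illustration of per-finger compositing with bones (every bone name here is an assumption; real rigs name them differently):

```javascript
// Very rough sketch: curl one finger by rotating its three joint bones together.
// "indexFinger.1/2/3" etc. are placeholder bone names; adjust to your rig.
function curlFinger(skeleton, prefix, amount) {
    // amount: 0 = straight, 1 = fully curled
    for (let joint = 1; joint <= 3; joint++) {
        const bone = skeleton.bones.find((b) => b.name === `${prefix}.${joint}`);
        if (bone) {
            bone.setYawPitchRoll(0, amount * 1.2, 0, BABYLON.Space.LOCAL);
        }
    }
}

// A "gesture" is then just a set of per-finger curl amounts.
curlFinger(skeleton, "indexFinger", 1.0);
curlFinger(skeleton, "middleFinger", 1.0);
curlFinger(skeleton, "thumb", 0.3);
```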

Just because I was pinged, here’s something I saw that might be of interest if you’re just looking for cool techniques (I definitely don’t recommend going down this road if you’re actually trying to achieve something short-term). :smile: Did you know that it’s technically possible to do DIY facial mocap using only free tools you probably already have?

Blender 2.8 Facial motion capture tutorial - YouTube

Again, probably not practical for anything utilitarian. But pretty cool!

Wow @JCPalmer looks like you’ve really been in the trenches with this! @syntheticmagus you aren’t kidding! Fortunately I’m a champion yak shaver.

So my questions are probably odd because I don’t come from any sort of gamedev background; I’m just a meager web developer. But my observation is that the standard approach to animation is typically to develop the animations offline in something like Blender and have them available at compile time, and then at runtime import and combine the animations (perhaps by attaching them to meshes, skeletons, or behaviors) and manipulate the weights and booleans between the various imported animations to achieve dynamic behavior. Is that accurate?

If so, my questions are probably not making much sense and I might be on my own. :sweat_smile: There are several objectives which would make it really nice to be able to generate animations at runtime. For instance, physics-based animations – think evolutionary algorithm with things learning to walk, etc, but NOT pre-rendered.

If you’ll humor me, let’s just hand-wave over the difficulty of that calculation and the complexity of the associated bookkeeping and assume that I have a black box function currentPositions(scene, entity) that could be used with onRenderObservable.add() or something similar that would provide me with all the (x,y,z) coordinates of all the major facial landmarks, hand landmarks, skeleton/joint landmarks, etc and that they were all somehow consistent.

Is it possible to use any of the existing systems (skeletons, animations, etc) to manipulate the meshes dynamically/at-runtime? In my case, if the only drawback is implementation complexity, I’m fine with that tradeoff. Thanks to the brilliant webworker canvas technique introduced in 4.2, I’m not at this point overly concerned with performance (continually impressed by how crazy performant BJS is).

Where I’m running into issues is this – let’s say that I have a pre-existing skeleton. The [(x, y, z), …] tuples might not match up with the joints of the skeleton. Therefore, I’d rather create the skeleton at runtime from the [(x, y, z), …] points (otherwise I’d run into a consistency issue which would probably result in some horribly deformed animations).

After all that rambling, I’ll try to tighten up my question:
I take @JCPalmer’s warning seriously that morph targets are a rabbit hole – especially because I don’t even know how to use Blender. But I’m okay at generating vectors of <x,y,z> coordinates.

So:

Is it possible to take a stock asset, like the dude, and dynamically animate him at runtime frame by frame using something like onRenderObservable.add() with a set of [(x, y, z), …] coordinates that correspond to things like the locations of knees, ankles, hands, arms, elbows, etc.?

Thanks again for entertaining my rambling question, as I don’t have a traditional game dev background and I’m struggling to find the right words and concepts.

Okay, I see DK’s video here on skeletons and it looks like maybe I just need to read up a little more on how they work. Sorry for causing confusion. Thank you @JCPalmer , @syntheticmagus , and @PirateJC for pointing me in the right direction!

Funny, I’ve been watching some tv series starring actors Ken Stott and Peter Davidson. Interesting to watch their mouth movement and those mouth shapes. Don’t see those O, U and L shapes very often unless they shout.

Almost like I’m watching examples of that “British stiff upper lip” :slight_smile:

Stay Safe, gryff :slight_smile:

Nailed it.

:sweat_smile: :scream: :cold_sweat: :fearful: :sob: :confounded: :rofl:

Yes, that is correct. Normally you would create a few dozen animations in Blender (or Maya / 3DS Max / etc.) and then blend between the animations at runtime:

Babylon supports bone animation blending:
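A quick hedged sketch of what blending can look like (the blend speed and frame range are placeholders):

```javascript
// Sketch: blend smoothly from the current pose into another clip.
// 0.1 is the per-frame blend speed; the 0-100 frame range is a placeholder.
skeleton.enableBlending(0.1);
scene.beginAnimation(skeleton, 0, 100, true); // loop the new clip
```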

It’s also possible to use physics with bones. In that case you will still create the skeleton and model in Blender, export it as glTF, import it into Babylon, and then attach physics impostors to the bones in Babylon:

https://doc.babylonjs.com/divingDeeper/physics/usingPhysicsEngine
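One do-it-yourself pattern (certainly not the only one) is to give a hidden proxy mesh a physics impostor and copy its simulated transform onto a bone every frame. A hedged sketch, assuming a physics engine is already enabled per the doc above, and with the mesh and bone names invented:

```javascript
// Hedged sketch: drive one bone from a physics-simulated proxy box.
// characterMesh, skeleton, and the "head" bone name are placeholders.
const proxy = BABYLON.MeshBuilder.CreateBox("headProxy", { size: 0.3 }, scene);
proxy.isVisible = false;
proxy.physicsImpostor = new BABYLON.PhysicsImpostor(
    proxy, BABYLON.PhysicsImpostor.BoxImpostor, { mass: 1 }, scene);

const headBone = skeleton.bones.find((b) => b.name === "head");

scene.onBeforeRenderObservable.add(() => {
    // Copy the simulated transform onto the bone in world space.
    headBone.setPosition(proxy.position, BABYLON.Space.WORLD, characterMesh);
    if (proxy.rotationQuaternion) {
        headBone.setRotationQuaternion(proxy.rotationQuaternion, BABYLON.Space.WORLD, characterMesh);
    }
});
```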

Babylon also supports morph targets (which are called shape keys in Blender):

The combination of bone blending + bone physics + morph targets gives enough dynamism for 99.9% of use cases, and it is the standard approach used in basically every game.

I don’t recommend creating morph targets/skeletons dynamically at runtime, because it’s really complicated. In particular, skeletons require weight painting, which is very hard to do automatically.

That’s why skeletons are manually created in another program (like Blender). If you already have a skeleton created (and weight painted), then it’s fairly easy to dynamically move the bones at runtime, but you are limited to the bone structure.
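To make that concrete, here is a small sketch of driving an imported skeleton’s bones per frame (the file path and bone name are only illustrative):

```javascript
// Sketch: import a rigged character and move one of its bones every frame.
// "scenes/character.babylon" and "upperArm.R" are placeholder names.
BABYLON.SceneLoader.ImportMesh("", "scenes/", "character.babylon", scene,
    (meshes, particleSystems, skeletons) => {
        const skeleton = skeletons[0];
        const arm = skeleton.bones.find((b) => b.name === "upperArm.R");
        if (!arm) { return; }
        let t = 0;
        scene.onBeforeRenderObservable.add(() => {
            t += scene.getEngine().getDeltaTime() / 1000;
            // Swing the arm with a simple sine wave instead of a baked animation.
            arm.setYawPitchRoll(0, Math.sin(t) * 0.5, 0, BABYLON.Space.LOCAL);
        });
    });
```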

So if you really want to generate everything dynamically at runtime, your best option is to create/modify the individual vertices of a mesh, like this:
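For instance, a minimal sketch of rewriting a mesh’s vertex positions every frame (the wavy displacement is just a stand-in for whatever your own routine produces):

```javascript
// Minimal sketch: update an updatable mesh's vertex positions each frame.
const sphere = BABYLON.MeshBuilder.CreateSphere("s",
    { segments: 32, diameter: 2, updatable: true }, scene);
const basePositions = sphere.getVerticesData(BABYLON.VertexBuffer.PositionKind).slice();
let time = 0;

scene.onBeforeRenderObservable.add(() => {
    time += scene.getEngine().getDeltaTime() / 1000;
    const positions = basePositions.slice();
    for (let i = 0; i < positions.length; i += 3) {
        // Offset each vertex along y by a small wave (stand-in for real data).
        positions[i + 1] += 0.1 * Math.sin(time * 3 + positions[i] * 5);
    }
    sphere.updateVerticesData(BABYLON.VertexBuffer.PositionKind, positions);
    // If lighting matters, you may also want to recompute normals here.
});
```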