Wow @JCPalmer, looks like you’ve really been in the trenches with this! @syntheticmagus, you aren’t kidding! Fortunately I’m a champion yak shaver.
So my questions are probably odd because I don’t come from any sort of gamedev background; I’m just a meager web developer. But my understanding is that the standard approach to animation is to author the animations offline in something like Blender so they’re available at compile time, then at runtime import and combine them (perhaps by attaching them to meshes, skeletons, or behaviors) and manipulate the weights and booleans between the various imported animations to achieve dynamic behavior. Is that accurate?
If so, my questions are probably not making much sense and I might be on my own.
There are several goals that would make it really nice to be able to generate animations at runtime. For instance, physics-based animation: think an evolutionary algorithm with things learning to walk, etc., but NOT pre-rendered.
If you’ll humor me, let’s just hand-wave over the difficulty of that calculation and the complexity of the associated bookkeeping, and assume that I have a black box function `currentPositions(scene, entity)` that could be used with `onRenderObservable.add()` or something similar, and that it would provide me with the (x, y, z) coordinates of all the major facial landmarks, hand landmarks, skeleton/joint landmarks, etc., all somehow consistent with one another.
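To make that concrete, here’s roughly what I’m picturing. `currentPositions` is entirely hypothetical, the landmark names are made up, and I’m only sketching the shape of the data and where it would plug into the render loop:

```js
// Hypothetical black box: returns world-space Vector3s for named landmarks,
// e.g. { kneeLeft: Vector3, ankleLeft: Vector3, wristRight: Vector3, ... }
function currentPositions(scene, entity) {
  // ...physics / evolutionary-algorithm bookkeeping happens here...
  return { kneeLeft: new BABYLON.Vector3(0, 0.5, 0) /* , ... */ };
}

// Every frame, pull the latest landmark positions and (somehow) apply them.
scene.onBeforeRenderObservable.add(() => {
  const landmarks = currentPositions(scene, dude);
  // TODO: drive the mesh/skeleton from `landmarks`; this is the part I'm asking about.
});
```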
Is it possible to use any of the existing systems (skeletons, animations, etc.) to manipulate the meshes dynamically at runtime? In my case, if the only drawback is implementation complexity, I’m fine with that tradeoff. Thanks to the brilliant web worker canvas technique introduced in 4.2, I’m not overly concerned with performance at this point (I’m continually impressed by how crazy performant BJS is).
Where I’m running into issues is this: let’s say that I have a preexisting skeleton. The [(x, y, z), …] tuples might not match up with the joints of that skeleton. Therefore, I’d rather create the skeleton at runtime from the [(x, y, z), …] points (otherwise I’d run into a consistency issue that would probably result in some horribly deformed animations).
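For example, I’m imagining building the skeleton on the fly from the same points, something vaguely like the sketch below. I have no idea if this is the right way to set up a runtime skeleton; the bone hierarchy and translations are just placeholders derived from my point list:

```js
// Build a skeleton at runtime from my generated points instead of importing one.
const skeleton = new BABYLON.Skeleton("runtimeSkeleton", "runtimeSkeleton", scene);

// Local matrices here are just placeholder offsets between successive landmarks.
const hips = new BABYLON.Bone("hips", skeleton, null, BABYLON.Matrix.Translation(0, 1, 0));
const kneeLeft = new BABYLON.Bone("kneeLeft", skeleton, hips, BABYLON.Matrix.Translation(0.1, -0.5, 0));
const ankleLeft = new BABYLON.Bone("ankleLeft", skeleton, kneeLeft, BABYLON.Matrix.Translation(0, -0.45, 0));

// Presumably the mesh also needs matching skinning data (matricesIndices / matricesWeights)
// before assigning the skeleton, which is exactly the consistency problem I'm worried about.
mesh.skeleton = skeleton;
```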
After all that rambling, I’ll try to tighten up my question:
I take @JCPalmer’s warning seriously that morph targets are a rabbit hole, especially because I don’t even know how to use Blender. But I’m okay at generating vectors of (x, y, z) coordinates.
So:
Is it possible to take a stock asset, like the Dude, and dynamically animate him at runtime, frame by frame, using something like `onRenderObservable.add()` with a set of [(x, y, z), …] coordinates that correspond to the locations of knees, ankles, hands, arms, elbows, etc.?
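In other words, something in the spirit of this sketch. The asset path, the landmark-to-bone mapping, and my use of setAbsolutePosition are all guesses on my part, so please correct me if this isn’t how bones are meant to be driven:

```js
BABYLON.SceneLoader.ImportMesh("him", "scenes/Dude/", "Dude.babylon", scene,
  (meshes, particleSystems, skeletons) => {
    const dude = meshes[0];
    const skeleton = skeletons[0];

    // My guess at a mapping from my landmark names to the Dude's bone names.
    const boneForLandmark = {
      kneeLeft: skeleton.bones.find((b) => b.name === "bone_kneeL"), // placeholder name
      // ...
    };

    scene.onBeforeRenderObservable.add(() => {
      const landmarks = currentPositions(scene, dude); // hypothetical black box from above
      for (const [name, position] of Object.entries(landmarks)) {
        const bone = boneForLandmark[name];
        if (bone) {
          bone.setAbsolutePosition(position, dude); // is this the right call to use here?
        }
      }
    });
  });
```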
Thanks again for entertaining my rambling question, as I don’t have a traditional game dev background and I’m struggling to find the right words and concepts.