Regarding rigs/bones:
My thinking has been that manipulating bones/rigs directly is too complex for an average person. So I've been approaching it as a matter of sequencing and blending pre-recorded keyframe clips. Of particular interest to me is whether a webcam could be used for rudimentary MoCap, which would let an average person sequence animations naturally. I think there's some good promise here. See: https://twitter.com/benrigby/status/1182768926171783169.
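For reference, here's a minimal sketch of how the webcam piece might work, assuming TensorFlow.js's pose-detection package (my choice for illustration, not necessarily what the demo above uses); the gesture rule and the `onGesture` callback are hypothetical placeholders:

```ts
// Minimal sketch: use webcam pose estimation to trigger pre-recorded clips.
// Assumes @tensorflow-models/pose-detection (MoveNet); the "wrists above
// nose" rule and onGesture callback are hypothetical placeholders.
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

async function watchWebcam(
  video: HTMLVideoElement,
  onGesture: (clipName: string) => void
) {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );

  async function tick() {
    const poses = await detector.estimatePoses(video);
    const keypoints = poses[0]?.keypoints;
    if (keypoints) {
      const nose = keypoints.find((k) => k.name === 'nose');
      const wrists = keypoints.filter((k) => k.name?.endsWith('wrist'));
      // Image y grows downward, so "above" means a smaller y value.
      if (nose && wrists.length === 2 && wrists.every((w) => w.y < nose.y)) {
        onGesture('jump'); // e.g. blend into a pre-recorded jump clip
      }
    }
    requestAnimationFrame(tick);
  }
  tick();
}
```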
I thought I had a really great idea a couple of months ago, but it hasn't panned out (yet). The idea was to download the keyframe data from Mixamo animations and save it in an animation library, e.g. idle.animdata.json, jump.animdata.json, and then apply one of those animation data files to any model that uses a Mixamo rig (since all Mixamo rigs share the same skeleton). The effect would be that you could use Mixamo/Fuse to make thousands of characters, and an animation could then be added to any of them with a single line of code.
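For what it's worth, here's roughly what I mean, sketched with Three.js (assuming that's the renderer in play; AnimationClip.toJSON/parse are real Three.js calls, but the surrounding wiring is illustrative):

```ts
// Sketch of the animation-library idea, assuming Three.js. Because every
// Mixamo auto-rig shares the same bone names, a clip serialized from one
// character should (in theory) bind to any other Mixamo-rigged model.
import * as THREE from 'three';

// One-time step: serialize a clip loaded from a Mixamo FBX/GLB,
// then save the string as e.g. idle.animdata.json.
function exportClip(clip: THREE.AnimationClip): string {
  return JSON.stringify(THREE.AnimationClip.toJSON(clip));
}

// Later, on any Mixamo-rigged character:
function playClip(model: THREE.Object3D, animJson: any): THREE.AnimationMixer {
  const clip = THREE.AnimationClip.parse(animJson);
  const mixer = new THREE.AnimationMixer(model);
  mixer.clipAction(clip).play(); // the hoped-for "single line of code"
  return mixer; // caller still needs to run mixer.update(delta) each frame
}
```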
It didn’t quite work out. See:
But parts of the mesh are actually animating correctly! So, maybe some promise here. More work needed.