I’ve tried working with the Baked Texture Animations feature in 5.0, and there doesn’t seem to be a way to bake Skeleton animations with Animation Groups (coming for example in a GLB file from Blender).
There seem to be some discussions around the topic, and all the examples in the docs and in the playgrounds use .babylon files to load vertex animations, but they don’t specify what shape the animations need to have in those files (I’m a noob on the subject as well).
How would one go about creating such files? Any ideas about a working pipeline that could do something like: Skinned Mesh + (Mixamo) Animations (+ Blender) → import into the Babylon scene → Win!?
Gave that a try, and was pleasantly surprised that it baked the animations in the same way we need them for the Animation Texture (on the Skeleton).
For now this is sufficient, since it lets me use animations on many models (instances), but I think the topic is still worth exploring: whether there is a way to make Baked Texture Animations work with Animation Groups.
I could never understand animation groups well enough to implement this, though I’d love to, and I was actually considering trying it again this week. There was a thread about this problem before: How to find the total number of frames of a skeleton animation?
From what I remember at the time, it seems it would require some manipulation of the RuntimeAnimation class to be able to set a specific frame and capture the transformations. I was a bit busy back then and couldn’t work out how that would go. I’ll try to look at it again this week (no promises!), and of course help is always appreciated.
Besides the basic problem of manipulating the AG directly to bake things, I remember there were two other minor issues to handle. All this comes from my memory of the analysis I did back then, so it might be way off.
The first was that applying VATs might clash with animation groups, but I think this was easy to handle by not using AGs and VATs at the same time.
The second issue is that the VAT implementation does not handle composition of animations, IIRC. So if you want to play and stop multiple groups, it won’t work. My memory is fuzzy about this, but I think it would be possible to use multiple textures (though not arbitrarily many), or to apply a small patch to the VAT so that it composes transformations when more than one is applied (but I think that was not trivial, since the order of transformations matters).
This is interesting and I didn’t know it. Do you mean that it exports textures that can be used with VAT, or that the BJS file has pre-baked animations stored in some form? I’m guessing the latter, but the docs mention the animation will be “kind of baked” during the export (hahaha, so very technical). I’d love to learn a bit more about this and whether this baking enables 100% GPU animations.
Thank you for sharing the previous thread and your insights, it really helps shed light on the complexities of animations.
Regarding the BabylonJS Blender Exporter, my scenario is as follows:
I got an asset with an Armature/Skeleton structure and multiple meshes attached to it (one Mesh would be just fine but that’s how the asset I use is made, and it’s quite useful as well)
inside Blender, I’ve defined just one Action (Animation) in the Dope Sheet, and added all the animation keyframes sequentially into that one Action
exporting to a .babylon file, the Exporter actually creates the Animation on the Skeleton as a sequence of keyframes
I have to define the ranges manually in code, since getAnimationRanges() on the Skeleton returns wrong values for some reason, so I cannot use them for the baking process. Since we only have one Blender Action, we will only have one complete AnimationRange anyway, so the ranges array has to be built manually in code.
the ranges array in the code has to match the keyframe values in Blender, one range (from/to) for each animation, and the animations then bake and play correctly in the browser
In a nutshell, having just one Blender Action with all animation keyframes laid out sequentially does seem to make the VAT work. In code, we have to specify the ranges array manually, since getAnimationRanges() on the skeleton will return one full range.
I’ve also tried many scenarios regarding multiple Actions in Blender (one for each Animation), as well as NLA tracks, but none seem to work correctly with getAnimationRanges(). The automated ranges seem to be off quite a bit, not sure why.
All of the above applies to the .babylon asset format. The GLTF format will instead create AnimationGroups rather than animations directly on the Skeleton, which don’t work for VAT, as you mentioned in the previous thread.
I tried to calculate VAT for animation groups. It should work, with known limitations: only one animation at a time, no blending, etc.
I have an issue with the animation blinking at its start. I think it is something with the starting offset of the animation, or number rounding. Any help would be appreciated!
Another idea is to make baking independent of the render frame rate. Right now, under the hood, it renders via Animation, which is called once per frame. For many frames (>500) at 30fps, the process takes too long. I know we can pre-bake it into JSON, but maybe it would be a good option to configure how to bake.
Hi @Evgeni_Popov, I was looking to integrate VAT animations into my project but was wondering what the benefits are?
With a standard async import of 100+ entities, I get similar fps to your playground example? And that’s with 0 optimization, so I’m sure I could improve that number. Hopefully my question isn’t too silly.
You should check the performance via Inspector → Statistics → Frame steps duration. Maybe you don’t see the difference because you’re not CPU-bound on your device. Try it on a mobile.
On your screenshot, you have 10FPS. Did you check what causes it?
Any idea why the weapon is not fully attaching to each instance? Its initial position looks good, but the weapon does not follow the animation.
If I move the player instance, the weapon does follow the player (but not the animation); it just stays in the same position relative to the player.
// Create an instance of the merged weapon mesh for this player.
const weapon = weaponMeshMerged.createInstance("player_" + id + "_sword");
// Bone 37 is the hand bone in this particular skeleton (hard-coded index).
const bone = playerInstance.skeleton.bones[37];
// Second argument is the mesh the bone transform is applied relative to.
weapon.attachToBone(bone, playerInstance);
“Render targets” time is the time spent rendering all RenderTargetTexture textures. It takes into account the rendering of shadow maps and effects layers, among others.
Are you able to provide a small repro demonstrating the problem with the weapon?