Baked Texture Animations with Animation Groups?


I’ve tried working with the Baked Texture Animations feature in 5.0, and there doesn’t seem to be a way to bake Skeleton animations with Animation Groups (coming for example in a GLB file from Blender).

There seems to be some discussion around the topic, and all the examples in the docs and in the playgrounds use .babylon files to load vertex animations, but they don't specify what shape the animations need to have in those files (I'm a noob on the subject as well :frowning: ).

How would one go about creating such files? Any ideas about a working pipeline that could do something like: Skinned Mesh + (Mixamo) Animations (+ Blender) → import into the Babylon scene → Win!?

I see in this discussion, Vertex Animation Texture module implementation, that the initial (beta) approach was supposed to use Animation Groups, but the idea got dropped.

Any help is appreciated, thank you.

Having worked a bit more on this, it proved, once again, that my knowledge was the limit :smiley:

Looking into integrating Blender and Babylon I found the exporter, Blender to Babylon.js exporter | Babylon.js Documentation.

Gave that a try, and was pleasantly surprised that it baked the animations in the same way we need them for the Animation Texture (on the Skeleton).

For now this would be sufficient, since it lets me use animations on many models (instances), but I think the topic is still worth exploring: whether there would be a way to make the Baked Texture animations work with Animation Groups :slight_smile:

Till then, learning something new every day…


Adding @brunobg the brilliant mind behind the animation texture.


I could never understand animation groups well enough to implement this, though I'd love to, and I was actually considering trying it again this week. There was a thread about this problem before: How to find the total number of frames of a skeleton animation?

From what I remember at the time, it seems it'd require some manipulation of the RuntimeAnimation class to be able to set a specific frame and capture the transformations. I was a bit busy back then and couldn't work out how that would be done. I'll try to look at it again this week (no promises!), and of course help is always appreciated.

Besides the basic problem of manipulating the AG directly to bake things, I remember there were two other minor issues to handle for everything one would need. All this comes from my memory of the analysis I did back then, so it might all be way off.

The first one was that applying VATs might clash with animation groups, but I think this was easy to handle by not using AGs and VATs at the same time.

The second issue is that the VAT implementation does not handle composition of animations IIRC. So if you want to play and stop multiple groups it won’t work. My memory is fuzzy about this, but I think it’d be possible to use multiple textures (but not arbitrarily many), or do a small patch to the VAT so that it’d compose transformations if you had more than one applied (but I think it was not trivial since the order of transformations matters).
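Just to make the "order matters" point concrete, here is a tiny self-contained sketch (plain row-major 4x4 arrays, no Babylon types) showing that composing a rotation and a translation in different orders gives different matrices, which is why naively merging two baked animations is not trivial:

```javascript
// Multiply two row-major 4x4 matrices: returns a * b.
function mul(a, b) {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
  return out;
}

// A 90-degree rotation around Z, and a translation of 5 along X.
const rotZ90 = [0, -1, 0, 0,  1, 0, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const transX = [1, 0, 0, 5,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];

// "Rotate then translate" vs "translate then rotate" (column-vector convention).
const rotThenTrans = mul(transX, rotZ90);
const transThenRot = mul(rotZ90, transX);
console.log(rotThenTrans.join() === transThenRot.join()); // false
```

So a patch that composes bone matrices from two VATs would have to pick (and document) a composition order.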

This is interesting, and I didn't know it. Do you mean that it exports textures that can be used with VAT, or that the .babylon file has pre-baked animations stored in some form? I'm guessing the latter, but the docs mention the animation will be kind of baked during the export (hahaha, so very technical). I'd love to learn a bit more about this and whether this baking enables 100% GPU animations.

Thank you for sharing the previous thread and your insights, it really helps shed light on the complexities of animations.

Regarding the BabylonJS Blender Exporter, my scenario is as follows:

  • I've got an asset with an Armature/Skeleton structure and multiple meshes attached to it (one mesh would be just fine, but that's how the asset I use is made, and it's quite useful as well)
  • inside Blender, I've defined just one Action (animation) in the Dope Sheet and added all the animation keyframes sequentially into that one Action
  • exporting to a .babylon file, the Exporter actually creates the animation on the Skeleton as a sequence of keyframes
  • I have to manually define the ranges in code, since getAnimationRanges() on the Skeleton returns the wrong values for some reason, so I cannot use them for the baking process. Since we only have one Blender Action, we will only get one complete AnimationRange anyway, so the ranges array has to be built manually in code
  • the ranges array in the code has to match the keyframe values in Blender, one range for each animation (from/to), and the animations then bake & play correctly in the browser

In a nutshell, having just one Blender Action with all animation keyframes sequentially laid out does seem to make the VAT work. In code, we have to specify the ranges array manually, since getAnimationRanges() on the skeleton will only return one full range.
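For anyone following along, here is a minimal sketch of what I mean by a manual ranges array. The animation names and frame numbers are hypothetical; they have to match whatever keyframes you laid out sequentially in the single Blender Action:

```javascript
// Hypothetical ranges: one entry per animation packed into the single Action.
const animationRanges = [
  { name: "idle", from: 0, to: 89 },
  { name: "walk", from: 90, to: 139 },
  { name: "run", from: 140, to: 179 },
];

// The VAT shader addresses frames by absolute row in the baked texture, so a
// small helper mapping a named range to its start/end frames is handy when
// setting per-instance animation parameters.
function getRangeOffsets(ranges, name) {
  const range = ranges.find((r) => r.name === name);
  if (!range) throw new Error("Unknown animation range: " + name);
  // One baked frame per Blender keyframe, inclusive on both ends.
  return { start: range.from, end: range.to, count: range.to - range.from + 1 };
}

console.log(getRangeOffsets(animationRanges, "walk")); // { start: 90, end: 139, count: 50 }
```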

I've also tried many scenarios with multiple Actions in Blender (one for each animation), as well as NLA tracks, but none seem to work correctly with getAnimationRanges(). The automated ranges seem to be off by quite a bit, not sure why.

All of the above are for the .babylon asset format. Regarding the GLTF format, it will create those AnimationGroups instead of direct animations on the Skeleton, which don’t work for VAT as you mentioned in the previous thread.


Yes, this is a problem I was having too at the time and that led to that other thread. I could never get the ranges properly with AGs either.

If you think your code would be helpful to improve the VAT system ping me and let’s get this into a PR.


I tried to calculate VAT for animation groups. It should work, with known limitations: only one animation at a time, no blending, etc.

I have an issue with a blinking animation at the start of the animation. I think it is something with the starting offset of the animation, or number rounding. Will appreciate any help)

Another idea is to make baking independent of the render frame rate. Right now, under the hood, it renders via Animation, which is called once per frame, so for many frames (>500) at 30 fps the process takes too long. I know that we can pre-bake the data into JSON, but maybe it would be a good option to be able to configure how the baking runs.
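To illustrate the pre-bake idea: the baked vertex data is just a flat Float32Array (one 4x4 bone matrix per bone per frame), so it can be round-tripped through base64 JSON and shipped with the asset. This is a generic Node-flavored sketch of the concept (it uses Buffer; in a browser you'd use btoa/atob), not the actual Babylon API:

```javascript
// Serialize a baked Float32Array to a small JSON payload.
function bakedDataToJSON(vertexData) {
  const bytes = Buffer.from(vertexData.buffer, vertexData.byteOffset, vertexData.byteLength);
  return JSON.stringify({ data: bytes.toString("base64") });
}

// Restore the Float32Array from the JSON payload.
function bakedDataFromJSON(json) {
  const bytes = Buffer.from(JSON.parse(json).data, "base64");
  return new Float32Array(bytes.buffer, bytes.byteOffset, bytes.byteLength / 4);
}

// Round-trip a tiny fake bake (2 frames x 1 bone x 16 floats).
const baked = new Float32Array(32).map((_, i) => i * 0.5);
const restored = bakedDataFromJSON(bakedDataToJSON(baked));
console.log(restored.length); // 32
```

With something like this, the expensive per-frame render only has to happen once, offline, and the runtime just rebuilds the texture from the stored floats.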

@Deltakosh Maybe it can be integrated into Babylon.js/vertexAnimationBaker.ts at fdbf393d1d7699dc7cc69cec1dca0819ebd2622a · BabylonJS/Babylon.js · GitHub somehow?

100 instances with 5 animations:


@Evgeni_Popov might have some ideas ?

For some reason, the very first frame of the first animation group is generated twice. Skipping one fixes the problem:
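Since the baked data is a flat Float32Array of frames, one way to express the workaround is to slice off the duplicated first frame before building the texture. This is a hedged sketch: `floatsPerFrame` depends on the baker's exact layout (in the simplest case, one 4x4 matrix per bone per frame, i.e. boneCount * 16 floats):

```javascript
// Drop the first frame's worth of floats from baked VAT data.
// slice() returns an independent copy; use subarray() if a view is enough.
function dropFirstFrame(vertexData, floatsPerFrame) {
  if (vertexData.length % floatsPerFrame !== 0) {
    throw new Error("vertexData length is not a whole number of frames");
  }
  return vertexData.slice(floatsPerFrame);
}

// Tiny fake bake: 3 frames, 2 bones, 16 floats per bone matrix.
const boneCount = 2;
const floatsPerFrame = boneCount * 16;
const threeFrames = new Float32Array(3 * floatsPerFrame).map((_, i) => i);
const trimmed = dropFirstFrame(threeFrames, floatsPerFrame);
console.log(trimmed.length / floatsPerFrame); // 2
```

Note that if you drop a frame this way, the from/to values in the ranges array have to be shifted by one as well.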

That’s another thing we will have to figure out as part of the animation system revision for 7.0 (Animation Improvements · Issue #13534 · BabylonJS/Babylon.js · GitHub).


Hi @Evgeni_Popov, I was looking to integrate vat animations into my project but was wondering what are the benefits?

With a standard async import of 100+ entities, I get similar fps to your playground example, and that's with zero optimization; I'm sure I could improve that number. Hopefully my question isn't too silly :slight_smile:

You should check the performance via Inspector -> Statistics -> Frame steps duration. Maybe you don't see the difference because you're not CPU-bound on your device. Try it on a mobile.

In your screenshot you have 10 FPS. Did you check what causes it?


Ok, that makes sense.

Here are the frame steps, but I must admit I'm not entirely sure what each line means.


So yeah, even for 30 FPS you need to stay under 33 ms per frame, and right now you're both CPU- and GPU-bound.

I think you can easily improve mesh selection (disable active-mesh selection for static objects like walls).

The RenderTargets stat comes from the UI, if I'm not mistaken (in your case).

For the GPU you need more info from the COUNT tab, because it depends on many factors like draw calls, texture sizes, etc.

I think you can easily improve mesh selection (disable active-mesh selection for static objects like walls).

Do you mean something like this:

m.checkCollisions = false; // skip collision tests
m.isPickable = false;      // skip picking/ray tests
m.receiveShadows = false;  // skip shadow sampling on this mesh

I wonder why the render targets are so high, it’s probably the names above each entity. Anyway, just have to do some tests to pinpoint that.

In any case, I will work on implementing VAT once I find the right way to dynamically integrate it.

Ok once you start baking the animation, and doing a little optimization, VAT gives amazing results!

With 200 entities playing different animations… I get 30 fps on a low end computer.

Any idea why the weapon is not fully attaching to each instance? Its initial position looks good, but the weapon does not follow the animation.
If I move the player instance, the weapon does follow the player (but not the animation); it just stays in the same position relative to the player.

// Bone index 37 happens to be the hand bone in this particular asset.
const weapon = weaponMeshMerged.createInstance("player_" + id + "_sword");
const bone = playerInstance.skeleton.bones[37];
weapon.attachToBone(bone, playerInstance);

I’m so close. Once I got this weapon issue sorted, I will implement vat + instances to my project.

(Apologies for spamming this post)

“Render targets” time is the time spent rendering all RenderTargetTexture textures. It takes into account the rendering of shadow maps and effects layers, among others.

Are you able to provide a small repro demonstrating the problem with the weapon?