Importing/parsing animations (from glTF) takes too long

Hi everyone,

Quick question: I have many Mixamo animations (currently 70 AnimationGroups; 67 bones, 30 fps animations with maybe 20 keyframes each) in one .glb file per character skeleton. Loading times are quite long, and the more animations, the longer the loading. The model itself is just 3k verts with no textures.

By loading time I mean the time between calling AssetManager.load and the task's done callback. I am at 15 seconds for a single skeleton; pure downloading takes just 3 seconds. There is nothing unusual in DevTools (no big lags or gaps).

Is there anything I can do to speed this up (like caching something in a build step)? How do you guys handle this?

Best wishes
Joe

  1. Try resampling the animations with the glTF-Transform CLI or online here - https://glb.babylonpress.org/ (check the Resample Animations checkbox). A resample sketch follows below.
  2. Import the animations from separate GLB file(s), apart from the main GLB, then retarget and assign them to the model. This way you may, for example, use just one animation GLB (without any meshes) for all models.
    Let me know if you need examples.
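For reference, a minimal resample sketch with the glTF-Transform JS API (the CLI equivalent would be gltf-transform resample in.glb out.glb); file names are placeholders:

const { NodeIO } = require('@gltf-transform/core');
const { resample } = require('@gltf-transform/functions');

async function resampleGlb(inPath, outPath) {
    const io = new NodeIO();
    const doc = await io.read(inPath);   // e.g. "character.glb"
    // Drop redundant keyframes losslessly (runs of linear keyframes collapse to their endpoints)
    await doc.transform(resample());
    await io.write(outPath, doc);        // e.g. "character_resampled.glb"
}

resampleGlb('character.glb', 'character_resampled.glb');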

Thanks @labris. Does your tool use Meshoptimizer under the hood? I already use that. After running your tool I could get the file size down, but the loading time seems to remain unchanged (within measurement error). :frowning:

Just to make sure: on glb.babylonpress.org, if I click on settings, are they applied automatically / applied to the downloaded file?

I already do point 2 :+1:

There is a checkbox for Meshoptimizer; it isn't applied by default. Be aware that successive compression/decompression passes may result in some loss.

New settings are saved on change but only applied after pressing 'R' (instant reload). The settings are then applied to the downloadable file.
Could you share GLB somehow?

Oh ok. I have now tried both of my main rigs, with more checkboxes ticked. But still no change in loading times.

Would you say the only leverage we have here is the keyframes? Either get the keyframe count down or even get rid of some bones?

Or can we maybe cache something? Because the thing is, once the skeleton/animations are loaded and ready, cloning the skeletons and animation groups is super fast. So can whatever happens during loading be cached?

I could share a stripped-down version. Would that help? Essentially there are two types of animations:

  • carbon copies right out of Mixamo (~90%)
  • edited Mixamo anims using their Rigging Tool (Blender)

So the biggest problem is the loading time?

It may help :slight_smile:

With resample animations - yes, for some Sketchfab models the difference can be huge.
All other settings do not apply to GLB animations (except pruning excessive nodes).

I have an idea, but need to experiment to prove the concept.

1 Like

It’s unlikely that you need all 70 animation groups at start, right?

My approach with such a file (since you cannot make use of any animation until all are loaded) would be to split the animations into different files on the server… That way you could:

  • Load first the one you need at start (and reduce the blocking time)
  • Load the others in the background, and progressively merge them into your animation groups in memory (see the sketch after this list)
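
A minimal sketch of that idea, assuming Babylon's SceneLoader.ImportAnimationsAsync and animation-only GLBs on the same server (all file names are made up):

const ROOT = "./assets/";                     // hypothetical path
const FIRST = "man_idle.glb";                 // mesh + the start animation
const REST = ["man_walk.glb", "man_run.glb"]; // animation-only files

async function loadCharacter(scene) {
    // Blocking part: only the mesh and the one animation needed right away
    const result = await BABYLON.SceneLoader.ImportMeshAsync("", ROOT, FIRST, scene);
    result.animationGroups[0].play(true);

    // Background part: merge the remaining clips into the scene one by one
    for (const file of REST) {
        await BABYLON.SceneLoader.ImportAnimationsAsync(
            ROOT, file, scene,
            false,                                              // keep existing animations
            BABYLON.SceneLoaderAnimationGroupLoadingMode.NoSync // don't touch current playback
        );
    }
    return result;
}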
2 Likes

Ok, thanks guys I see.

Yes, indeed. Strong correlation between number of animations and loading time (everything else being constant).

Gonna report back when I have the shareable version of the skeleton ready.

Hmm, you got me there. I even load animations for items (e.g. ShootWithRifle) that may not be in the level at all.

Ohh, you know what, the first thing I am going to try now is measuring how long it takes to load a single animation. If that is gonna be lag-free… :astonished:

Here is a non-optimized Gangnam Style test PG with 44 animations - https://playground.babylonjs.com/#FL4YYC#20
Neither the model nor the animations are optimized (used as is).
The model and animations are from Ready Player Me, so they are compatible with the Mixamo rig.
On my side the loading time is around 7 sec, and 1 sec when cached.
I didn't have time (yet) to implement background loading after the main animation set is loaded and playing.

1 Like

I’m sure it will!
Here is a Python script if you want to save time on your export process:

import bpy
import os

# GLOBALS
heavy_glb = "/path/to/your/file.glb"
export_mesh = "first_only"  # only the first exported .glb keeps the mesh
# export_mesh = "keep"      # every exported .glb keeps the mesh
# export_mesh = "delete"    # no exported .glb keeps the mesh


def empty():
    # Unhide everything so the select/delete below catches all objects
    for obj in bpy.data.objects:
        obj.hide_set(False)
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()

def reload():
    # Start from an empty scene and re-import the heavy GLB
    empty()
    bpy.ops.import_scene.gltf(filepath=heavy_glb)

if __name__ == "__main__":
    # Collect all action names once
    reload()
    action_names = [a.name for a in bpy.data.actions]

    # Export one GLB per action
    for index, name in enumerate(action_names):
        # Reload and remove every action but this one
        # (iterate over a copy: removing while iterating is unsafe)
        reload()
        for action in list(bpy.data.actions):
            if action.name != name:
                bpy.data.actions.remove(action)
        # Delete meshes unless this export should keep them
        mesh = ""
        if export_mesh == "delete" or (export_mesh == "first_only" and index > 0):
            for obj in bpy.data.objects:
                obj.hide_set(False)
            bpy.ops.object.select_all(action='DESELECT')
            for obj in bpy.data.objects:
                if obj.type == 'MESH':
                    obj.select_set(True)
                    bpy.context.view_layer.objects.active = obj
            bpy.ops.object.delete()
            print("Deleted geometry")
        else:
            mesh = "_Mesh"
        # Export next to the source file, tagging the action (and mesh) in the name
        file_name = os.path.basename(heavy_glb)
        folder = os.path.dirname(heavy_glb)
        new_file_name = file_name.split(".")[0] + "_" + name + mesh + ".glb"
        new_file_path = os.path.join(folder, new_file_name)
        bpy.ops.export_scene.gltf(filepath=new_file_path, export_animations=True, check_existing=False)
        print("Exported action", name, "to")
        print(new_file_path, 3 * os.linesep)
    # Finished
    print("DONE")

If you want to give it a try:

  • Open Blender and open a text editor
  • Copy-paste the code and edit the heavy_glb path
  • export_mesh = "first_only" means only the first .glb will contain the mesh.

So basically, if you have a man.glb with actions such as idle, run, and walk, it will export:

  • man_idle_Armature_Mesh.glb with the mesh + the idle action
  • man_walk_Armature.glb with the walk action only
  • man_run_Armature.glb with the run action only

Up to you then to add the latter ones to the animation groups on the Babylon.js side, later on while the game is running :slight_smile:

1 Like

Thanks @labris, this is very helpful! :slight_smile:

The bad news first: https://playground.babylonjs.com/#FL4YYC#22 With regular loading, a single animation takes 256 ms on average. I know it is un-optimised, but you would need to improve that by over a factor of 16 just to fit into a single 16 ms frame at 60 fps. So just-in-time lazy loading is probably not a reliable option :frowning:

Anyway, @labris, you mean loading from the browser cache, right? I had not thought about that. But have a look below; this might even beat the browser cache :o

@Tricotou, thanks. I already have something like that in place (but currently the other way round, i.e. merging individual animations into a single file). Gotta love Blender for its scripting power :+1:


Ok, now the good news (maybe). Can you have a look at this playground please: https://playground.babylonjs.com/#FL4YYC#23

Am I missing something or is this amazingly fast**?

**The time to load the json from the file system is missing, but how many milliseconds is that going to take? Also, the animations do not actually play, but I guess that is just an issue with linking the bone TransformNodes. It works in my local project.


****HOLY SHIT. I just did another quick and dirty hack in my local project: I serialized the AnimationGroups of the main character and saved them as a .json string (25 MB). Reading that json and using the parsed AnimationGroups as master animations: total time 1600 ms :astonished: :partying_face: :champagne:
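
For anyone reproducing this, a rough sketch of the trick, assuming Babylon's AnimationGroup.serialize() and AnimationGroup.Parse() (the URL is a placeholder):

// Dump all AnimationGroups of a loaded character to a JSON string
function groupsToJson(animationGroups) {
    return JSON.stringify(animationGroups.map(g => g.serialize()));
}

// Rebuild them later; Parse() looks each target up in the scene,
// so the skeleton's TransformNodes must already exist when this runs
async function groupsFromJson(url, scene) {
    const response = await fetch(url);   // e.g. "anims.json" (hypothetical)
    const parsed = JSON.parse(await response.text());
    return parsed.map(data => BABYLON.AnimationGroup.Parse(data, scene));
}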

1 Like

I believe that for the test to be comparable, the fetch time should also be included.

When I run this PG for the second time (so assets are in browser cache) the time is comparable (less than one second).

I’ll test some ideas how to improve it more.

Here is the PG with IndexedDB; comparable results even after a hard reload (less than 1 sec) - https://playground.babylonjs.com/#FL4YYC#28
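
The gist of that approach, as a bare-bones sketch with plain IndexedDB (database, store, and key names are arbitrary):

function openDb() {
    return new Promise((resolve, reject) => {
        const req = indexedDB.open("assetCache", 1);
        req.onupgradeneeded = () => req.result.createObjectStore("glbs");
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
    });
}

async function getGlb(db, key, url) {
    // Try the cache first
    const cached = await new Promise((resolve) => {
        const req = db.transaction("glbs").objectStore("glbs").get(key);
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => resolve(undefined);
    });
    if (cached) return cached;
    // Cache miss: download once, store the raw bytes for next time
    const buffer = await (await fetch(url)).arrayBuffer();
    db.transaction("glbs", "readwrite").objectStore("glbs").put(buffer, key);
    return buffer;
}

// Usage (inside an async function); wrapping the bytes in a File lets the
// loader pick the .glb plugin from the extension
const buffer = await getGlb(await openDb(), "hero-anims", "anims.glb");
await BABYLON.SceneLoader.AppendAsync("", new File([buffer], "anims.glb"), scene);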

1 Like

Background loading after the first animation - https://playground.babylonjs.com/#C2LL1F#18
The time until the first animation plays is greatly reduced.

Version with timeout - https://playground.babylonjs.com/#C2LL1F#17

1 Like

Great job @labris! On top of cutting the loading times, Babylon/the browser takes care of the caching for you. :slight_smile:

One nice thing about the IndexedDB solution is that you can select which assets get cached and which do not.

Another cool thing you can do with IndexedDB/the browser cache: if you have something like a game with a start screen or main menu, you can start re/caching (parts of) your assets there - like while displaying the Babylon logo :wink:
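
A rough sketch of that warm-up idea, just fire-and-forget prefetches over HTTP (the file list is illustrative):

// Warm the HTTP cache while the player sits in the menu; responses are
// cached per the server's cache headers, so later loads hit the cache
const PRELOAD = ["level1.glb", "hero_anims.glb"];  // hypothetical assets
for (const file of PRELOAD) {
    fetch("./assets/" + file).catch(() => {});     // ignore failures, it's only a warm-up
}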

Also, just to state the obvious: in @labris's playground with the coroutines, you can display an animated loading screen. The way I currently load my assets is everything at once, blocking the browser for the duration of the loading. A negative side effect is that my CSS loading animation does not actually play. (*Whenever it properly animates, I know the loader has crashed :smiley: ) https://playground.babylonjs.com/#C2LL1F#19 (flashing colors)
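
A minimal sketch of that coroutine pattern, assuming Babylon's runCoroutineAsync (file names invented): yielding between imports hands the render loop a frame, so a loading animation keeps playing:

// Merge animation files one at a time without stalling the render loop
function* loadAnimationsCoroutine(scene, files) {
    for (const file of files) {
        // Yielding a promise suspends the coroutine until the import finishes
        yield BABYLON.SceneLoader.ImportAnimationsAsync("./anims/", file, scene, false);
        yield; // give one frame back to the renderer (and the CSS loader)
    }
}

scene.onBeforeRenderObservable.runCoroutineAsync(
    loadAnimationsCoroutine(scene, ["walk.glb", "run.glb", "jump.glb"])
).then(() => console.log("all background animations merged"));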

With IndexedDB we have a problem though: security. It seems you cannot access an IndexedDB from a different origin. I guess this is like LocalStorage; e.g., you will run into trouble on sites like itch.io, which embed your game in an iframe and even, I think, under a different subdomain.

With the browser cache, please correct me if I am wrong, but you do not have full control over cache invalidation: you risk serving a stale cached asset rather than the new, updated one. Therefore they recommend appending versioning GET params, as in the sketch below.
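
For example (BUILD_VERSION is a hypothetical constant baked in at build time; run inside an async function):

// A new version string produces a new URL, so the browser cannot serve
// the stale cached copy
const BUILD_VERSION = "1.4.2";
await BABYLON.SceneLoader.AppendAsync(
    "./assets/", "character.glb?v=" + BUILD_VERSION, scene
);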


In terms of answering the thread question:

  • loading many animations just takes long
  • to mitigate that, you have to cache animations and/or lazy-load them

I marked @labris's post as the solution, with an honourable mention for @Tricotou. Hope this is alright with you.

1 Like

FYI:

Disregard everything I said above that involved time measurements in my local project. Neither console.time nor the DevTools Network timings seem to give reliable data (or I have been using/reading them wrong). I had to measure the actual loading time with a bloody stopwatch…

Anyway, I am using my own Timer class now, which seems to work. Based on that: if you are on an external hosting site with CORS shackles, you are probably best off using a normal GLB and letting the browser cache it.
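
For reference, a guess at what such a Timer could look like, just performance.now() deltas (not the actual class):

class Timer {
    constructor() { this.marks = new Map(); }
    start(label) { this.marks.set(label, performance.now()); }
    stop(label) {
        // Wall-clock delta since start(label)
        const ms = performance.now() - this.marks.get(label);
        console.log(label + ": " + ms.toFixed(1) + " ms");
        return ms;
    }
}

// Usage: wrap the whole AssetManager.load ... onFinish span
const timer = new Timer();
timer.start("load");
// assetsManager.onFinish = () => timer.stop("load");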

The JSON approach (i.e. serialise the AnimationGroups, store them as json, load that json instead of a glb) is off the table. For some weird reason, loading the 25 MB json file with the Babylon AssetManager takes significantly longer than loading it via an XMLHttpRequest! And compared to the GLB you do not even win 1 second! Not to mention that I, at least, was unable to make XHR/fetch requests on itch.io. :frowning:


@labris I have just revisited your online tool. Is there a way to round/compress animation data?

Because I am seeing a lot of stuff like this in the GLB file (from Blender):

	"name":"mixamorig:LeftHandIndex2",
	"rotation":[
		0.11024817079305649,
		8.714422250477583e-08,
		-3.5466763392832945e-07,
		0.9939041137695312
	],
	"translation":[
		0.00025584176182746887,
		0.0321047380566597,
		-3.534369170665741e-07
	]

Do we really need rotation values down to the Planck scale? Does this make such a difference? At least the translations could be rounded to 3 or 4 decimal places. Oh, and this:

			"scale":[
				1,
				0.9999998211860657,
				0.9999999403953552
			],

I didn’t do that. I have a Blender script that is actually supposed to normalize scales :thinking: I am guessing this happens on export.
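
For experimenting with the rounding idea, a rough sketch with the glTF-Transform JS API that rounds every animation sampler output (the rounding transform is improvised, not a built-in; note that rounded float32 values do not shrink the .bin by themselves, though they may compress better):

const { NodeIO } = require('@gltf-transform/core');

async function roundAnimations(inPath, outPath, decimals = 3) {
    const io = new NodeIO();
    const doc = await io.read(inPath);
    const factor = 10 ** decimals;
    for (const anim of doc.getRoot().listAnimations()) {
        for (const sampler of anim.listSamplers()) {
            // The output accessor holds the keyframe values (rotations, translations, scales)
            const output = sampler.getOutput();
            const values = output.getArray();
            for (let i = 0; i < values.length; i++) {
                values[i] = Math.round(values[i] * factor) / factor;
            }
            output.setArray(values);
        }
    }
    await io.write(outPath, doc);
}

roundAnimations('character.glb', 'character_rounded.glb');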

Here is an example with worker loading (the worker loads a file, then passes it back as an ArrayBuffer); it would be interesting to see the results from your Timer class - https://playground.babylonjs.com/#H9PYXY#5

The JSON approach could be comparable if the JSON were zipped, but this approach leads to more complications compared to GLB anyway.

Currently only resampling is supported (deduplicating keyframes). I like the idea of cutting the extra digits :slight_smile: . Do you have any loading time measurements comparing the usual and the rounded values? Is there a substantial gain in file size?

I did try a worker some months ago when I first realized that my loading times were getting too high. The problem is that you actually extend loading times due to the messaging back and forth. Since you have to do the parsing and instantiation on the main thread anyway, the worker cannot actually save much loading time :frowning:

Ok, here we go: I load a minimal level, exchange the character mesh/animations between trials, and write down the time diff. I do not analyse time series or anything; I need big chunks of loading time gains. Statistical improvements of a couple of milliseconds are not really useful here.

Results are meh. I am using my big character model, which went through Meshoptimizer, and compare it against the same model processed by your tool with all the boxes (first section) ticked. At best I see an improvement of 400 ms. Btw, if I use the raw Blender export without Meshoptimizer, I get a 5,000 ms difference.

I also tried “Meshopt | EXT_meshopt_compression”. It reduced the file size by almost 1 MB, but loading times didn't change. Also, I had to add meshopt_decoder.js, which seems to spin up a WASM module. This might kill off any gains from loading?


Meanwhile I checked the model in Blender. Indeed, I do normalize properly: except for the root bone, I delete all translation and scale keyframes. Rotations are a different story; way too many decimal places. I did a couple of tests and just manually rounded the quaternion components to 3 decimal places. It did not make a visual difference.

I went further and checked the exported glb before MeshOptimizer: it is already cluttered with translation and scaling vectors, so it is the Blender glTF exporter that adds them. All the keyframe data is in the .bin part, i.e. off limits :frowning:

Aww man, it is not going well. I need to get a cookie now and cheer myself up. :grin:

I would split the big file into smaller ones, load only the animations needed for the first seconds, then continue loading in the background till done.

That is true; the main thing is that your main thread is not blocked by slow loading, and the parsing is quite fast, so it doesn't block the render. You may also create a worker pool, which may speed up the loading stage - https://playground.babylonjs.com/#SAXG32#28
I found that for this case 2 workers give the best results; depending on the quantity and size of the loaded files, the number of workers may be increased.
This function is useful to limit the number of workers depending on the user's computer capabilities (see the sketch below).
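
If the function meant here is navigator.hardwareConcurrency, capping the pool could look like this (the worker script name is made up):

// Never spawn more workers than the machine has logical cores
const POOL_MAX = 2;  // the sweet spot found above
const count = Math.min(POOL_MAX, navigator.hardwareConcurrency || 1);
const pool = Array.from({ length: count }, () => new Worker("loader-worker.js"));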

1 Like

Since it’s about performance, profile it with the DevTools; a flame chart or a dump would help more.

Also, if you have a lot of frames, check if this helps (if it does not work, switch to 5.x, since it was made a long time ago)

2 Likes