Baked animation performance

Ah yes sorry my bad!

There’s a matrix per bone, so 4 reads per bone. I have fixed the doc.
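To make the "4 reads per bone" concrete, here is a minimal plain-JS sketch of the layout (the exact texel layout is an assumption for illustration, not Babylon.js source): each bone's 4x4 matrix occupies 4 RGBA texels, one matrix row per texel, so fetching one bone costs 4 texture reads.

```javascript
// Simulated baked-animation texture: one 4x4 matrix per bone,
// one matrix row per RGBA texel (4 floats). 2 bones -> 8 texels -> 32 floats.
const texture = new Float32Array(32);

// Fill bone 1 with an identity matrix, row by row.
for (let row = 0; row < 4; row++) {
  const texel = [0, 0, 0, 0];
  texel[row] = 1;
  texture.set(texel, (1 * 4 + row) * 4); // boneIndex * 4 rows + row, 4 floats each
}

// Emulates what the vertex shader does: 4 "texture reads" per bone.
function readBoneMatrix(tex, boneIndex) {
  const rows = [];
  for (let row = 0; row < 4; row++) {        // 4 texel fetches
    const base = (boneIndex * 4 + row) * 4;  // each texel holds 4 floats (RGBA)
    rows.push(Array.from(tex.slice(base, base + 4)));
  }
  return rows; // 4 rows of 4 floats = the bone's 4x4 matrix
}
```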

What was the consensus here? Is there anything being done in the UFO demo that's fundamentally different from what babylon.js already does?

Hi @arcman7

There are some good findings, guided by @Evgeni_Popov, but many things still need to be done to ensure a fair apples-to-apples comparison. I just don't have enough time for it.

My findings:

  • The PBR material was not the cause of the slowdown. I tried a basic standard material and saw no noticeable performance difference.
  • I tried storing the baked animation both as a texture and as a data array. There was no performance difference, so the slowdown was not caused by the texture sampling calls in the shader.

Is three.js faster, as the UFO demo suggests? I don't know yet, because the following questions need to be answered, and I don't have the time. I would love to have a full-time job working on WebGL (#opentowork :grin:).

  • Do the victims in the UFO demo have a much lower poly count? The model file is in a specific format not supported by babylon.js, and I haven't looked into how to load it.
  • What is the FPS in the UFO demo? No FPS profiler is embedded in the demo.

My assumption:

  • The UFO demo uses models with a lower poly count. I notice that even when I load static models as thin instances in babylon.js, FPS drops below 60 once I reach roughly 10 million faces/vertices on my laptop (can't remember the exact number). So I assume the main factor is the total number of faces/vertices. And of course, sampling the baked animation adds GPU work, so a shader with animation will be slower than the equivalent shader for the same number of static meshes.
  • I vaguely remember from the last time I checked the UFO code that they apply LOD to the models. It might be worth trying babylon.js mesh simplification when the camera is zoomed out.
  • Another possibility is to use a few CPU-calculated animations for a large number of thin instances when the camera is zoomed out. When the camera is far away, you only see little dots anyway, so they can share a few synchronized animations (walking, running, poking). When the camera is close, you show the few hundred visible thin instances with a larger variety of animations. I don't want to go down this route if there are better approaches.
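The distance-based idea in the last bullet could be sketched as a simple strategy switch (names and thresholds below are hypothetical, not Babylon.js API):

```javascript
// Hypothetical sketch: choose an animation strategy per camera distance.
// Far instances share a small pool of CPU-updated, synchronized clips;
// near instances get per-instance GPU-sampled baked animation.
function pickAnimationStrategy(distanceToCamera, farThreshold = 100) {
  if (distanceToCamera > farThreshold) {
    // Instances are a few pixels wide; 3 shared clips (walk, run, poke)
    // are enough, and the CPU updates only 3 poses per frame.
    return { mode: "cpu-synced", clipPool: 3 };
  }
  // Close up: every instance samples its own baked animation on the GPU.
  return { mode: "gpu-baked", clipPool: Infinity };
}
```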

Yeah, I have no idea what's going on with those .buf files. I would take a better look, but it's all minified code and I don't see any source map. I'm with you that it would take a while to sort through to get a fair comparison going :confused:

Their mesh size is 260 triangles, while the arachnid has 1794.

In addition, they optimized the calculation for their use case:

  • only rotation and position for bones (no scaling). So they only do two texture reads to get the quaternion and position values, while we do 4 reads to build a 4x4 matrix.
  • a maximum of two bone influences per vertex

So they do 2*2=4 texture reads in total, while we do 4*4=16 because the arachnid has NUM_BONE_INFLUENCERS = 4.
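The totals above follow from a simple product, reads per vertex = bone influences × texture reads per bone:

```javascript
// Reads per vertex = (bones influencing the vertex) x (texel reads per bone).
function textureReadsPerVertex(boneInfluences, readsPerBone) {
  return boneInfluences * readsPerBone;
}

// UFO demo: 2 bones max, quaternion + position = 2 reads per bone -> 4.
const ufoReads = textureReadsPerVertex(2, 2);

// Babylon.js arachnid: NUM_BONE_INFLUENCERS = 4, full 4x4 matrix = 4 reads -> 16.
const arachnidReads = textureReadsPerVertex(4, 4);
```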

Thanks @Evgeni_Popov. I think the samurai model I used has 740 triangles, so its performance sits between the UFO victims and the arachnid.

  • only rotation and position for bones (no scaling). So they only do two texture reads to get the quaternion and position values, while we do 4 reads to build a 4x4 matrix.

This is very interesting. I assume in most scenarios only rotation and position are required? So could I also try 2 reads instead of 4 in my custom shader?
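For reference, here is the math a 2-read scheme would do, sketched in plain JS (this is an illustration of the quaternion-plus-position transform, not Babylon.js shader code): one texel supplies the bone's rotation quaternion [x, y, z, w], a second supplies its position, and rotate-then-translate replaces the 4x4 matrix multiply.

```javascript
// Apply a bone's rotation quaternion q = [qx, qy, qz, qw] and position p
// to a vertex v, replacing the usual 4x4 matrix transform.
// Rotation uses the standard identity: v' = v + qw*t + (q.xyz x t),
// where t = 2 * (q.xyz x v).
function applyBoneQuatPos(v, q, p) {
  const [vx, vy, vz] = v;
  const [qx, qy, qz, qw] = q;
  const tx = 2 * (qy * vz - qz * vy);
  const ty = 2 * (qz * vx - qx * vz);
  const tz = 2 * (qx * vy - qy * vx);
  return [
    vx + qw * tx + (qy * tz - qz * ty) + p[0],
    vy + qw * ty + (qz * tx - qx * tz) + p[1],
    vz + qw * tz + (qx * ty - qy * tx) + p[2],
  ];
}
```

For example, a quaternion for a 90° rotation about Z maps (1, 0, 0) to (0, 1, 0) before the position offset is added.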

  • only two bones max

Do you mean how many weighted bones can influence a vertex?

Yes, if you are using a custom shader, you can do the texture reads and the math yourself. You can have a look at what they are doing with Spector.js.

Yes indeed.