Improving mesh load time at high fps

I have 300 frames of a moving mesh that are meant to play back at 30fps (giving a 10s “video”). Each frame consists of a .drc file for the geometry and a .jpg file for the texture.

The problem I have been struggling with is how to minimize the load time for each frame (so that I can push my fps as close to 30 as possible).

Current App Workflow (in brief)

For each frame:

  1. Retrieve .drc and .jpg files and process the raw data into objects.
// Fetch the compressed geometry and decode it with the Draco decoder.
const geometry = <ArrayBuffer> await Tools.LoadFileAsync(geometryResUrl);
const decodedGeometry = await this.draco.decodeMeshAsync(geometry);

// Load the texture from the jpg URL and attach it to a PBR material.
const texture = new Texture(textureResUrl, scene);
const material = new PBRMaterial(`Material${frameId}`, scene);
material.albedoTexture = texture;
  2. Store the geometry and material in our application buffer.
const frame = new MyFrame(decodedGeometry, material);
this.buffer.push(frame);
  3. When it is time to display that particular frame (after checking that the texture data has loaded), create a mesh and add it to the scene while disposing of the previous frame’s mesh.
// Build the mesh for this frame from the buffered geometry and material.
const mesh = new Mesh("dracoMesh" + frameId, this.scene);
const geometry = new Geometry("dracoGeometry" + frameId, this.scene, frame.decodedGeometry);
geometry.applyToMesh(mesh);
mesh.material = frame.material;

// Dispose of the previous frame's mesh along with its geometry and material.
// (The new mesh is already registered with the scene by the Mesh constructor,
// so there is no need to push it into scene.meshes manually.)
scene.meshes[prevFrameIdx].dispose(true, true);

Sample Load Times

If I simply load the geometry (without any texture), I can hit 30fps; each frame takes only about 20-30ms to load (most of that time is Draco decoding; network load time is negligible as everything is served from localhost).

However, if I have the texture (code as above), each frame takes about 100-200ms.

For size reference, geometry (.drc) = ~40-70KB / texture (.jpg) = ~100-200KB.

Questions
My key question is: where is the best place to start optimizing the workflow?

For instance, I have the following ideas in mind, and I am not sure which of them is likely to give a good improvement in load time:

  1. Re-use objects instead of re-creating them for each frame, e.g. create just one Mesh and update its Geometry and Texture data dynamically for each frame, OR create just one Mesh, Geometry and Texture and update only the bare minimum dynamically (see the sketch after this list). [Note: My initial preference was to re-create the objects because the code seems cleaner that way, but I am not sure if there is a trade-off in efficiency.]

  2. Maximize asynchronous processing. From my understanding, currently the network retrieval and Draco decoding tasks are done asynchronously. I read somewhere that initialization of the WebGL stuff can only be done on the main thread. So I am not sure if there is anything else that I can make async. Is there any Texture-related code that I can make async?

  3. Getting a better GPU. I am not sure if this would help because my current GPU usage is not even half its capacity (utilization max ~40%, memory max 2.3GB/8GB). I am using an NVIDIA RTX 3070. [Note: This project only needs to run on my workstation, so I can incorporate hardware optimizations too if there is anything I can do.]

  4. If all else fails, consider using the glTF format (?). I have no experience with glTF, but after some reading I have a hunch that it may be suitable for dynamic meshes, since it supports animation, which could exploit the temporal redundancy across frames and bring the storage/processing demand down (?). [Note: This is not an ideal approach because I prefer not to have to pre-process the data (that is outside the scope of this work), but I just wanted to hear if anyone has experience using glTF for similar use cases.]
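
To make idea 1 concrete, this is roughly what I have in mind (an untested sketch: it assumes every frame has the same vertex count, and the frame fields such as positions/uvs are placeholders rather than my actual data structures):

import { Mesh, PBRMaterial, Scene, Texture, VertexBuffer, VertexData } from "@babylonjs/core";

// Sketch: one reusable mesh + material; each frame only overwrites the
// existing vertex buffers and swaps the albedo texture, instead of creating
// new Mesh/Geometry/Material objects every frame.
class ReusableFramePlayer {
    private mesh: Mesh;
    private material: PBRMaterial;

    constructor(scene: Scene, firstFrame: { positions: Float32Array; indices: Uint32Array; uvs: Float32Array }) {
        this.mesh = new Mesh("dracoMesh", scene);
        this.material = new PBRMaterial("sharedMaterial", scene);
        this.mesh.material = this.material;

        // Create the vertex buffers once, marked as updatable.
        const vertexData = new VertexData();
        vertexData.positions = firstFrame.positions;
        vertexData.indices = firstFrame.indices;
        vertexData.uvs = firstFrame.uvs;
        vertexData.applyToMesh(this.mesh, true);
    }

    showFrame(frame: { positions: Float32Array; uvs: Float32Array; texture: Texture }): void {
        // Overwrite the existing GPU buffers instead of rebuilding the mesh.
        this.mesh.updateVerticesData(VertexBuffer.PositionKind, frame.positions);
        this.mesh.updateVerticesData(VertexBuffer.UVKind, frame.uvs);

        // Swap the albedo texture and release the previous one.
        const previous = this.material.albedoTexture;
        this.material.albedoTexture = frame.texture;
        previous?.dispose();
    }
}

If the vertex count changes between frames, updateVerticesData would not be enough and the buffers would need to be recreated with setVerticesData instead.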

Any advice on these or any other ideas would be truly appreciated. Please be as verbose as you can; I have a lot to learn about Babylon.js. Please help a newbie out.

Welcome to the forum!

Did you try a video texture, compressing the jpg files into an mp4? I think the video can be streamed a little in advance to keep up with the intended framerate.
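
Roughly something like this (just a sketch; “textures.mp4” stands for your jpg sequence encoded as a video, and scene is your existing scene):

import { PBRMaterial, Scene, VideoTexture } from "@babylonjs/core";

declare const scene: Scene; // the existing Babylon scene

// One VideoTexture would replace the 300 per-frame jpg textures.
const videoTexture = new VideoTexture("frameTexture", "textures.mp4", scene);
videoTexture.video.muted = true; // most browsers only allow muted video to autoplay

const material = new PBRMaterial("videoMaterial", scene);
material.albedoTexture = videoTexture;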

Hi Cedric, thank you for your response.

From my understanding, VideoTexture is intended to play a video on a static geometry (like projected on a plane).

However, my geometry is moving as well, i.e., each frame of the VideoTexture needs to coincide with the corresponding frame of the geometry (which is itself another static mesh). So 300 .jpg map to 300 .drc (sequentially), instead of 300 .jpg → 1 .drc.

Have you come across any examples or resources like that? Unfortunately, I couldn’t find any.

EDIT: I did more profiling (using Chrome’s “Performance” tab) and realized WebGL’s texImage2D call is taking a significant portion of the processing time (70+%).

Any idea if a better GPU would help in this case? And software-wise, is there any code optimization that I can consider? (E.g., I read about using texSubImage2D instead of texImage2D, but this call happens inside the Babylon.js framework and I am unsure how to change that.)

texImage2D is the function that transfers data from CPU-side RAM to the GPU; its main limitation is the PCI Express transfer rate.

VideoTexture is a texture…just like any other texture. You can use it with any geometry.

I believe it’s taking 70+% because of an active wait: ANGLE/the driver is waiting for the transfer to complete before continuing. Can you try to always preload 2 or 3 textures in advance? I think this would mitigate the time spent waiting.
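
Something along these lines, for example (just a sketch; the cache, the file naming, and the lookahead depth are placeholders):

import { Scene, Texture } from "@babylonjs/core";

declare const scene: Scene; // the existing Babylon scene

const TOTAL_FRAMES = 300;
const LOOKAHEAD = 3;
const textureCache = new Map<number, Texture>();

// Keep LOOKAHEAD textures loading ahead of the frame being displayed, so
// their downloads and GPU uploads start before the frame is actually needed.
function prefetchTextures(currentFrame: number): void {
    for (let i = 1; i <= LOOKAHEAD; i++) {
        const frameId = currentFrame + i;
        if (frameId < TOTAL_FRAMES && !textureCache.has(frameId)) {
            textureCache.set(frameId, new Texture(`frames/texture_${frameId}.jpg`, scene));
        }
    }
}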

Hi, yes, I preload some textures in advance, but it will eventually stall again (i.e., waiting on the next frame’s texture) if I try to stream at a higher fps.

Would using RawTexture.update() or DynamicTexture.update() (instead of instantiating a new Texture() each frame) help reduce the cost of the texImage2D calls, e.g. if it uses texSubImage2D instead? (Although I doubt it.)
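
To illustrate what I mean, something like this (just a sketch: the dimensions and the createImageBitmap/OffscreenCanvas route for getting raw RGBA pixels are my own assumptions, and I haven’t checked which GL call Babylon.js actually issues inside RawTexture.update):

import { RawTexture, Scene, Texture } from "@babylonjs/core";

declare const scene: Scene; // the existing Babylon scene

// Placeholder dimensions of the jpg frames.
const WIDTH = 1024;
const HEIGHT = 1024;

// Create one RGBA texture up front and keep reusing it.
const frameTexture = RawTexture.CreateRGBATexture(
    new Uint8Array(WIDTH * HEIGHT * 4), WIDTH, HEIGHT, scene, false, true, Texture.BILINEAR_SAMPLINGMODE);

// Per frame: decode the jpg asynchronously, then overwrite the existing texture.
async function uploadFrame(jpgBlob: Blob): Promise<void> {
    const bitmap = await createImageBitmap(jpgBlob);            // async image decode
    const canvas = new OffscreenCanvas(WIDTH, HEIGHT);
    const ctx = canvas.getContext("2d")!;
    ctx.drawImage(bitmap, 0, 0, WIDTH, HEIGHT);
    const pixels = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;  // RGBA bytes
    frameTexture.update(new Uint8Array(pixels.buffer));         // re-upload into the same GPU texture
}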

ping @sebavan

You can project a video texture on a moving mesh, btw. Cedric implied that in his response but didn’t say it explicitly, so maybe it wasn’t clear. You should do that.

Let’s add @Evgeni_Popov, who had to deal with video frame data sync in the past, I believe.

Yes, using a VideoTexture should work; however, you would have to manually control the video to advance it to the frame you need at any given time. Regarding the geometry, you should just update the vertex buffers; it would be faster than recreating the mesh entirely. I worked on volumetric video rendering some time ago, where the vertex positions/normals as well as the texture were updated every frame, and it ran at 60fps (I can’t share the code, though). Note that it wasn’t using Draco compression; the geometry data was uncompressed and already formatted for GPU upload.
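
As a rough illustration of that approach (not the actual code from that project: the names, the 30fps constant, and the assumption that the mesh was created with updatable vertex buffers are placeholders, and a real implementation would also wait for the video’s “seeked” event before rendering, since seeking via currentTime is asynchronous):

import { Mesh, VertexBuffer, VideoTexture } from "@babylonjs/core";

declare const mesh: Mesh;                 // created once, with updatable vertex buffers
declare const videoTexture: VideoTexture; // the jpg sequence encoded as one mp4

const FPS = 30;

// Per displayed frame: seek the video to the matching texture frame and
// overwrite the mesh's position/normal buffers with this frame's geometry.
function showFrame(frameId: number, positions: Float32Array, normals: Float32Array): void {
    videoTexture.video.pause();
    videoTexture.video.currentTime = frameId / FPS; // manual frame advance
    mesh.updateVerticesData(VertexBuffer.PositionKind, positions);
    mesh.updateVerticesData(VertexBuffer.NormalKind, normals);
}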