Browser memory heap size keeps increasing after using scene.dispose()

I have an app that allows loading of multiple scenes. Upon loading a new scene, I check if an existing scene exists and, if so, I dispose of it using scene.dispose(). This all works visually, but when I look at the memory tab in Chrome dev tools I see the total JS heap size keeps increasing until it eventually runs out of memory and crashes the browser.

Is scene.dispose actually not freeing up resources (meshes, textures, the scene object, etc.) correctly? Do I need to dispose of the engine entirely?

Here is a snippet of how I'm handling the scene disposal:

function disposeScene() {
    if (!scene) return;
    return new Promise((resolve) => {
        scene.onDisposeObservable.addOnce(() => {
            resolve(scene);
        });
        scene.dispose();
    });
}

async function newScene() {
    if (scene) {
        engine.stopRenderLoop();
        await disposeScene();
    }

    scene = new Scene(engine);
}
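For reference, the addOnce-then-dispose wrapping used above can be exercised in isolation. This is a minimal sketch: `MiniObservable` and `MiniScene` are stand-ins for illustration, not the Babylon.js classes — they implement only the `addOnce`/`notifyObservers`/`dispose` behavior the pattern relies on.

```javascript
// Stand-in for Babylon.js's Observable (assumption: only addOnce and
// notifyObservers are needed to illustrate the dispose-wrapping pattern).
class MiniObservable {
    constructor() { this.once = []; }
    addOnce(cb) { this.once.push(cb); }
    notifyObservers(data) {
        const cbs = this.once;
        this.once = []; // addOnce observers fire exactly once
        cbs.forEach(cb => cb(data));
    }
}

// Stand-in scene whose dispose() fires the observable, mirroring how
// Scene.dispose() notifies onDisposeObservable when teardown completes.
class MiniScene {
    constructor() { this.onDisposeObservable = new MiniObservable(); }
    dispose() { this.onDisposeObservable.notifyObservers(this); }
}

// Same wrapping as disposeScene(): the promise resolves only after the
// dispose observable has fired, so callers can safely await teardown.
function disposeAsync(scene) {
    return new Promise(resolve => {
        scene.onDisposeObservable.addOnce(() => resolve(scene));
        scene.dispose();
    });
}
```

The point of the wrapper is ordering: anything awaiting `disposeAsync` runs only after the scene has finished notifying its dispose observers.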

Some resources are cached by the engine (like the compiled shaders).

To completely wipe all the resources, the ultimate way is to call engine.dispose() :slight_smile:

This is what I suspected, sounds like disposing the engine is probably best. Thanks for the response!

-Anupam


Hmm. Disposing the engine is still causing the heap size to increase. It seems to occur only with meshes/textures and not with other scene elements like lights and cameras. I also tried disabling browser caching in Chrome dev tools, but I'm having the same issue. I can also verify that it is in fact caching the loaded assets, because if I load the same scene twice, my heap size does not increase and the scene loads faster the second time.

Is there some scene/engine caching configuration that I might need to look at? I will keep investigating, but any tips would be welcome.

— code —

function disposeEngine() {
    if (!engine) return;
    return new Promise((resolve) => {
        engine.onDisposeObservable.addOnce(() => {
            resolve(engine);
        });
        engine.dispose();
    });
}

async function newScene() {
    if (engine) {
        await disposeEngine();
    }
    engine = new Engine(params.canvas);
    scene = new Scene(engine);
    scene.useRightHandedSystem = true;
    scene.clearColor = new Color4(0, 0, 0, 0);
}

Ideally I would need a repro in the Playground to help you further.

I cannot repro in my local testing :frowning: @Anupam_Das could you provide a repro?

Hi all. I made an attempt at creating a test in the PG. I conveniently forgot to mention I am using the Google Draco loader to load .drc files. (Yes, I know we could use glbs, but loading drc files is one of our requirements.)

Pressing the 'l' key will load a Draco mesh (the bunny).
Pressing the 'l' key again will dispose the previous mesh and load a simpler Draco mesh (a cube).

Note that if you just hit the 'l' key over and over, the heap just keeps increasing (or just hold it down). Eventually the browser crashes :wink: I'm a little new to the Chrome debug tools, so perhaps I'm misinterpreting the heap, but I'm guessing that when a mesh is disposed, the memory heap should decrease — particularly when going from the heavier bunny mesh to the cube mesh.

Any assistance would be greatly appreciated.

Thanks!
Anupam

Adding @bghgary, as IIRC there is some memory behavior in the Draco loader that is currently known to be an issue and under investigation?

Thanks for looking into it. Playing devil's advocate… a workaround would be to batch convert all of our drc files to glbs using the Draco decoder CLI (GitHub - google/draco: Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.), then converting obj to glb using the Cesium glTF pipeline. The requirement is to ensure compatibility with existing assets without requiring the artists to re-export and re-configure things. However, if we could support drc natively within the app, that would be best (we plan on supporting glb for newer assets). We natively support drc currently… with no issues except for the memory leak :slight_smile:

I’m not aware of an issue with the Draco loader. We do have issues with the KTX loader, though. Maybe it’s related, since they are both WASM?

@syntheticmagus, is that issue ringing a bell? I cannot find it again…

My recollection is that there is a KTX memory leak, but it won’t grow unbounded as it’s capped by the memory space of the WASM, which itself is encapsulated in a WebWorker. We had a branch demoing a way to dispose those workers, but we never checked it in due to lack of demand and closed the PR in December, I think. It doesn’t sound like that’s related to this, but I don’t know for sure; I don’t have a lot of familiarity with Draco. :upside_down_face:

Could you use the Uber or three.js loaders to decode, transform to glTF format, then load into Babylon?

loaders.gl/modules/draco at master · visgl/loaders.gl · GitHub .

Uber also has loaders for Arrow, among many others that I'm sure lots of STEM apps would benefit from, beyond this use case.


@jeremy-coleman, interesting API link. We would need to do what you are saying as a workaround: basically convert our drcs to glTFs using the above-mentioned APIs or the ones I mentioned (glTF Pipeline). We don't want to do this conversion at runtime client-side, as there would be additional overhead. We could set up a script to batch convert our drc assets to glTF/glb. This is likely the path we will have to take if the memory leak cannot be resolved in a timely manner.

You could potentially do it in a service worker with precaching and end up with better performance than before. Chromium has a lot of perf to give you; you just have to trick it into not being so greedy.

As an aside, out of my own curiosity: is there some commonly known formula for min-maxing network bandwidth vs. compression speed? (A term for me to google.)

I think we figured out the issue (thanks to a co-worker for finding the solution).
It seems the Draco compression object needs to be disposed, as we were making a new instance of it every time we loaded a model. Ideally, for multiple models, you would make just one instance and re-use it, then dispose of it after all the meshes are loaded. I will need to do further testing to verify, but it seems this is the fix.

Yes, the dracoCompression object should be disposed when it is no longer needed. That said, if possible, keep the dracoCompression object around until all of the models are loaded instead of creating a new one per model.
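A minimal sketch of that create-once/reuse/dispose lifecycle. The `DracoDecoder` class here is a stand-in for illustration only, not the Babylon.js API — in Babylon the real object is `DracoCompression`, which likewise exposes a `dispose()` method — but the instance counting makes the difference between the two patterns concrete:

```javascript
// Stand-in decoder to illustrate the lifecycle; in Babylon.js the real
// object is DracoCompression, which also exposes dispose().
class DracoDecoder {
    constructor() { DracoDecoder.instances++; this.disposed = false; }
    async decode(buffer) { return { vertexCount: buffer.length }; } // placeholder
    dispose() { this.disposed = true; }
}
DracoDecoder.instances = 0;

// Leaky pattern: a new decoder (and its worker/WASM memory) per model,
// never disposed — this is what makes the heap grow on every load.
async function loadLeaky(buffers) {
    const meshes = [];
    for (const b of buffers) {
        const decoder = new DracoDecoder(); // new instance every time
        meshes.push(await decoder.decode(b));
    }
    return meshes;
}

// Fixed pattern: one shared decoder for all models, disposed exactly
// once after every mesh has been loaded.
async function loadShared(buffers) {
    const decoder = new DracoDecoder();
    try {
        const meshes = [];
        for (const b of buffers) meshes.push(await decoder.decode(b));
        return meshes;
    } finally {
        decoder.dispose(); // free the decoder's resources once, at the end
    }
}
```

Loading N models the leaky way creates N decoder instances; the shared way creates one and tears it down when the batch is done, which matches the fix described above.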
