I have an app that allows loading of multiple scenes. Upon loading a new scene, I check whether an existing scene exists and, if so, dispose of it using scene.dispose(). This all works visually, but when I look at the Memory tab in Chrome DevTools I see the total JS heap size keep increasing until it eventually runs out of memory and crashes the browser.
Is scene.dispose() actually not freeing up resources (meshes, textures, the scene object, etc.) correctly? Do I need to dispose of the engine entirely?
Here is a snippet of how I'm handling the scene disposal:
function disposeScene() {
  if (!scene) return;
  return new Promise((resolve) => {
    scene.onDisposeObservable.addOnce(() => {
      resolve(scene);
    });
    scene.dispose();
  });
}
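Since the same promise-wrapping pattern is needed again below for the engine, it can be factored into one generic helper. A minimal sketch (the `makeDisposable` mock below is only a stand-in for a Babylon object so the helper can be shown in isolation; real Babylon scenes and engines already expose `onDisposeObservable` and `dispose()`):

```javascript
// Generic helper: resolves once target.dispose() has actually finished,
// signalled through its onDisposeObservable.
function whenDisposed(target) {
  if (!target) return Promise.resolve(null);
  return new Promise((resolve) => {
    target.onDisposeObservable.addOnce(() => resolve(target));
    target.dispose();
  });
}

// Minimal stand-in for a Babylon object, for illustration only.
function makeDisposable() {
  const listeners = [];
  return {
    disposed: false,
    onDisposeObservable: { addOnce: (fn) => listeners.push(fn) },
    dispose() {
      this.disposed = true;
      listeners.splice(0).forEach((fn) => fn());
    },
  };
}
```

With this in place, both `disposeScene()` and `disposeEngine()` reduce to `await whenDisposed(scene)` and `await whenDisposed(engine)`.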
async function newScene()
{
  if (scene) {
    engine.stopRenderLoop();
    await disposeScene();
  }
  // …
}
Hmm, disposing the engine is still causing the heap size to increase. It seems to occur only with meshes/textures and not with other scene elements like lights and cameras. I also tried disabling browser caching in Chrome DevTools, but I'm having the same issue. I can verify that it is in fact caching the loaded assets, because if I load the same scene twice, the heap size does not increase and the scene loads faster the second time.
Is there some scene/engine caching configuration that I might need to look at? I will keep investigating, but any tips would be welcome.
— code —
function disposeEngine() {
  if (!engine) return;
  return new Promise((resolve) => {
    engine.onDisposeObservable.addOnce(() => {
      resolve(engine);
    });
    engine.dispose();
  });
}
async function newScene()
{
  if (engine) {
    await disposeEngine();
  }
  engine = new Engine(params.canvas);
  scene = new Scene(engine);
  scene.useRightHandedSystem = true;
  scene.clearColor = new Color4(0, 0, 0, 0);
  // …
}
Hi all. I made an attempt at creating a test in the PG. I conveniently forgot to mention that I am using the Google Draco loader to load .drc files (yes, I know we could use glbs, but loading .drc files is one of our requirements).
Pressing the 'l' key will load a Draco mesh (the bunny).
Pressing the 'l' key again will dispose the previous mesh and load a simpler Draco mesh (a cube).
Note that if you just hit the 'l' key over and over (or just hold it down), the heap keeps increasing, and eventually the browser crashes. I'm a little new to the Chrome debug tools, so perhaps I'm misinterpreting the heap, but I'm guessing that when a mesh is disposed the memory heap should decrease, particularly when going from the heavier bunny mesh to the cube mesh.
My recollection is that there is a KTX memory leak, but it won’t grow unbounded as it’s capped by the memory space of the WASM, which itself is encapsulated in a WebWorker. We had a branch demoing a way to dispose those workers, but we never checked it in due to lack of demand and closed the PR in December, I think. It doesn’t sound like that’s related to this, but I don’t know for sure; I don’t have a lot of familiarity with Draco.
@jeremy-coleman: interesting API link. As a workaround we would need to do what you are saying: basically convert our .drc files to glTFs, using the above-mentioned APIs or the ones I mentioned (glTF Pipeline). We don't want to do this conversion at runtime client-side, as there would be additional overhead, but we could set up a script to batch-convert our .drc assets to glTF/glb. This is likely the path we will have to take if the memory leak cannot be resolved in a timely manner.
You could potentially do it in a service worker with precaching and end up with better performance than before. Chromium has a lot of perf to give you; you just have to trick it into not being so greedy.
As an aside, out of my own curiosity: is there some commonly known formula for min-maxing network bandwidth vs. compression speed (a term for me to google)?
I think we figured out the issue (thanks to a co-worker for finding the solution).
It seems the Draco compression object needs to be disposed: we were making a new instance of it every time we loaded a model. Ideally, for multiple models, you would make just one instance, reuse it, and dispose of it after all the meshes are loaded. I will need to do further testing to verify, but it seems this is the fix.
Yes, the dracoCompression object should be disposed when it is no longer needed. That said, if it's possible, keep the dracoCompression object around until all of the models are loaded instead of creating a new one per model.
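In code, the fix amounts to sharing one decoder instance across all loads and disposing of it once at the end, rather than constructing one per model. A hedged sketch of the pattern: `createDracoDecoder`, `decode`, and `loadModel` below are placeholder names standing in for whatever Draco wrapper you use (in Babylon.js the real object is `DracoCompression`); `liveDecoders` is only there to make the worker/WASM cost of each instance visible.

```javascript
// Each decoder instance allocates worker/WASM memory that only
// dispose() releases — that was the source of the leak.
let liveDecoders = 0;

// Placeholder for the real decoder (e.g. Babylon's DracoCompression).
function createDracoDecoder() {
  liveDecoders++;
  return {
    decode(buffer) { return { vertexBytes: buffer.length }; }, // stand-in
    dispose() { liveDecoders--; },
  };
}

// Leaky pattern (what we were doing): new decoder per model, never disposed.
// Fixed pattern: one instance for the whole loading session.
const sharedDecoder = createDracoDecoder();

function loadModel(buffer) {
  // Reuse the shared decoder instead of `new`-ing one per model.
  return sharedDecoder.decode(buffer);
}

// After all models are in, release the decoder once.
function finishLoading() {
  sharedDecoder.dispose();
}
```

However many models are loaded, `liveDecoders` stays at 1 until `finishLoading()` drops it to 0, which is exactly the behavior the heap snapshots should show after the fix.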