I have an issue where loading glTF files with animations allocates memory that is not released when the files are disposed. Technically this may not be a memory leak, since the allocated memory does not grow with repeated load/dispose cycles, but I wonder what exactly is going on and whether it is a bug.
Open the playground, using Babylon 4.2.1 (the problem is present in Babylon 5 as well).
Open Chrome DevTools and its “Performance monitor” panel, where you can watch the memory consumption.
Trigger garbage collection via DevTools.
The idle allocated memory for me is about 40-45 MB; this is basically the baseline.
Click the “Load/Destroy” button; the glTF gets loaded.
After the glTF is loaded, click the “Load/Destroy” button again; the glTF gets disposed.
Trigger garbage collection again; the allocated memory is now around 5 MB higher than the baseline.
Repeating this process does not increase the allocated memory further; after unloading and garbage collection it stays at around 50 MB, but I can never get back to the baseline of 40-45 MB.
The problem with this: I'm building an application with 20+ different glTFs with animation data, but only 2-3 of them are visible at a time. Still, the memory consumption increases until every glTF has been visible at least once and then stays at that level. This is potentially an issue since we are also targeting mobile devices with limited memory.
It seems that cache cleaning removes the animations of all other models in the scene as well.
Is there a way to dispose only what remains from the model?
It's probably possible with two scenes, though…
Hmm, this doesn't seem to do the trick for me. With a baseline of ~45 MB, after loading, destroying, and triggering garbage collection my memory consumption is still around 50 MB in 4.2.1. When trying with v5, the cache cleaning removes the UI element as well, so less memory is used, but I can't tell whether that's just from the removed UI element…
I updated the PG to trigger the load/dispose methods by pressing the L key (make sure to click on the scene first to set focus): https://playground.babylonjs.com/#9108W0#18
Additionally, I get an error in the console: “this._activeSkeletons.reset is not a function”.
A while back I made a local update to the SmartArray class that adds pop() and remove() methods, which may be helpful here. Here's an updated PG that removes the skeleton from the _activeSkeletons smart array, which should allow it to be GC'ed unless there's another reference somewhere else. In my tests the heap size gets back down to within a few MB of where it started, on both v4.2.1 and the latest v5, but I wonder how it does with 20 different models/skeletons being loaded and disposed like in your app.
EDIT: another, simpler solution is to call scene._activeSkeletons.dispose() like below, which sets the smart array's internal array to a new empty array, dropping the references stored in the old one.
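To illustrate why reset() leaves the references alive while dispose() releases them, here is a simplified stand-in for the SmartArray class. This is plain JavaScript modeled on the behavior described in this thread, not Babylon's actual implementation:

```javascript
// Simplified model of a smart array, showing why reset() alone
// doesn't release references while dispose() does.
class SmartArray {
    constructor(capacity = 4) {
        this.data = new Array(capacity); // internal backing array
        this.length = 0;                 // number of "live" elements
    }
    push(value) {
        this.data[this.length++] = value;
    }
    // Per-frame cache reset: sets length to 0 but keeps the old
    // elements in `data`, so they stay reachable until overwritten.
    reset() {
        this.length = 0;
    }
    // Replaces the backing array entirely, dropping all old
    // references so the GC can collect them.
    dispose() {
        this.length = 0;
        this.data = [];
    }
}

const skeletons = new SmartArray();
skeletons.push({ name: "skeleton0" });
skeletons.reset();
console.log(skeletons.data.length > 0); // true: old reference still held
skeletons.dispose();
console.log(skeletons.data.length);     // 0: reference dropped
```

With the real class, calling scene._activeSkeletons.dispose() corresponds to the second call here.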
That's just with two of the same model, though. It sounds like the extra heap usage accumulates with each separately loaded model, so with 20 different models (e.g. from 20 different URLs) it becomes more significant than just a few MB… I'll wait and see if @thomasaull can test removing the skeleton from the _activeSkeletons smart array in his project, since it's already set up for that larger scenario with lots of different models.
OK, this does seem to do something. In my app I'm doing a test run where I load all 20 characters at once.
Before doing anything with _activeSkeletons:
Baseline: ~94 MB
Loaded: ~1.2 GB
Unloaded: ~1.2 GB
Calling _activeSkeletons.dispose() drops the memory usage back to ~150 MB, so this is a huge improvement. There still seems to be some allocated memory not being freed, but that could be something totally different or an error on my side. So thanks, this helps a lot! I'm going to try to implement @Blake's pop/remove methods, since that seems a bit less like using a sledgehammer.
Also, it's strange that 20 of my characters fill up the memory at around 50 MB per character when the test character seems to take only 5 MB… not sure what's going on there.
They will be cleaned during the next render frame (when more objects are added to it); it is a per-frame cache. In your example there are no more meshes, so there is no render frame.
I'm pretty sure you meant scene._activeSkeletons.data = [], not scene._activeSkeletons = [], which caused an error from attempting to call the SmartArray method reset() on a plain JavaScript array.
But it also seems like there should be a public API to drop the skeleton reference so that it can be GC'ed. Maybe scene.removeSkeleton(skeleton) should handle removing the cached reference to the removed skeleton?
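A hypothetical sketch of what such an API could do. This patch does not exist in Babylon; the scene and cache are modeled here with plain objects rather than the real classes, and swap-remove is just one way the cache scrub might be implemented:

```javascript
// Hypothetical: a removeSkeleton that also scrubs the per-frame
// _activeSkeletons cache so the skeleton becomes collectable
// immediately. Plain-object model, not real Babylon code.
function removeSkeleton(scene, skeleton) {
    // Standard removal from the scene's skeleton list.
    const idx = scene.skeletons.indexOf(skeleton);
    if (idx !== -1) {
        scene.skeletons.splice(idx, 1);
    }
    // Additionally drop the cached reference: swap-remove within
    // the "live" portion of the smart array's backing data.
    const cache = scene._activeSkeletons;
    const cacheIdx = cache.data.indexOf(skeleton);
    if (cacheIdx !== -1 && cacheIdx < cache.length) {
        cache.data[cacheIdx] = cache.data[cache.length - 1];
        cache.data[cache.length - 1] = null; // release the slot
        cache.length--;
    }
}

// Mock scene holding two skeletons, both cached as active.
const skelA = { name: "A" };
const skelB = { name: "B" };
const scene = {
    skeletons: [skelA, skelB],
    _activeSkeletons: { data: [skelA, skelB], length: 2 },
};
removeSkeleton(scene, skelA);
console.log(scene._activeSkeletons.data.includes(skelA)); // false
```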
Otherwise, if you don't touch the private _activeSkeletons smart array, then when you have 20 active skeletons and dispose all 20, you must wait until 20 new skeletons are all active at the same time before all 20 of the old references are overwritten.
Pretty sure that's not true, since the smart array's internal JS array, data, still retains all of the skeleton references until the array elements are overwritten by new active skeletons.
Unless you access the private smart array and call dispose() on it, or manually set data to an empty array, for example; but then you're reaching into private internals just to let the object references be dropped without waiting for them to be overwritten…
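That retention behavior can be demonstrated with a plain-object stand-in for the cache (again, a simplified model of the per-frame reset, not Babylon's actual SmartArray):

```javascript
// Demonstrates that after a reset(), old elements stay reachable
// until each slot is overwritten by a new active element.
const cache = { data: [], length: 0 };
const push = (v) => { cache.data[cache.length++] = v; };
const reset = () => { cache.length = 0; }; // per-frame reset keeps data

// 20 skeletons active, then all disposed and the cache reset.
for (let i = 0; i < 20; i++) push({ id: "old-" + i });
reset();

// Only 3 new skeletons become active afterwards...
for (let i = 0; i < 3; i++) push({ id: "new-" + i });

// ...so 17 of the old skeleton references are still held.
const stale = cache.data.slice(cache.length).filter(Boolean).length;
console.log(stale); // 17
```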
Good news: I did some debugging and solved the majority of my memory problems. There was a weird issue with reactive properties from Vue, which caused a bug that filled up the memory. I removed everything that doesn't need reactivity, and instead of 1 GB+ the app now uses something in the range of 100-300 MB, depending on what's visible on screen.
@Deltakosh I assume that when you talk about the next frame, you mean the next frame of an arbitrary skeleton animation, not the next rendering frame?
Here's a screencast where I did some testing with my app: loading 19 characters at once, destroying them afterwards, and finally calling _activeSkeletons.dispose() when clicking the “Debug” button. Before doing that, the memory consumption stays at about 100 MB; afterwards it drops back to around 35 MB once garbage collection is triggered.
In my case this seems like a lot of unnecessary data to keep in memory, and I'm not sure what the reasoning for it is.
A memory snapshot shows about 65 MB of data in _activeSkeletons (assuming the unit is bytes, which I'm not sure about):