Compressed textures use half the GPU RAM but twice the JavaScript RAM


We have a branch (of Cryptovoxels) that loads KTX compressed textures. We’re loading the .dxt.ktx files directly (on desktop), not using Basis. It works well: we have fewer stalls caused by glTexImage2D, it runs really smoothly, and it uses half the GPU RAM for the same scene, which is a fantastic result. We haven’t migrated all our textures to compressed formats yet, so we expect to see more gains over time.

However, our branch that loads the .dxt.ktx textures uses twice the JavaScript RAM. Is something weird going on where the compressed texture is being retained in JavaScript? Is there something I can do to purge the JavaScript RAM once the texture has been uploaded? I tried engine.doNotHandleContextLost = true but it didn’t seem to make a difference. I’m digging into the texture loading code, but wondering if I’ve missed something obvious.



@bghgary and @Evgeni_Popov might have an explanation?


The only place in the code that I see that allocates CPU memory for KTX1 is this: Babylon.js/khronosTextureContainer.ts at f45f9664f9498e43354a00012690f69f75544633 · BabylonJS/Babylon.js · GitHub

This data is given directly to WebGL.
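One thing that might be worth ruling out (my speculation, not something confirmed in the linked code): typed-array views keep their entire backing ArrayBuffer alive, so if a view over the whole KTX file is retained anywhere after upload, the full file stays resident in JS memory. A `slice()` copy does not have that problem:

```javascript
// Sketch: a Uint8Array view pins its whole backing ArrayBuffer,
// while slice() allocates a fresh, minimal buffer.
const ktxFile = new ArrayBuffer(16 * 1024 * 1024);   // stand-in for a 16 MB .dxt.ktx download
const mipLevel = new Uint8Array(ktxFile, 128, 1024); // view into the container, no copy
const copy = mipLevel.slice();                       // independent 1 KB buffer

console.log(mipLevel.buffer.byteLength); // 16777216: the view keeps the whole file alive
console.log(copy.buffer.byteLength);     // 1024: only the data itself
```

So if anything (a cache, a closure, a texture object) holds on to a view like `mipLevel` after the GPU upload, the entire downloaded file is retained, even though only a small part of it is referenced.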

Are you loading from a url or from a buffer?

The problem is that the GC will kick in whenever the browser decides, and you have no way (to my knowledge) to control that…

The decompressor can consume 2x more memory as part of its normal operation, and the browser is free not to reclaim this memory if it doesn’t need to. For example, if you still have enough free memory even after the decompressor has finished working, no GC will occur. So seeing more memory consumption when using the decompressor doesn’t necessarily mean something is leaking: it may only be that the decompressor used more memory to do its work and the browser did not reclaim it yet. Of course, it’s also possible there’s a leak somewhere, but you can’t really know by just looking at the memory consumption with and without the decompressor.

One way to know for sure would be to trigger the GC by hand, to be sure all reclaimable memory is actually reclaimed, but I don’t think this is possible (even for debugging purposes).

[…] For Chrome, according to this page Fix Memory Problems | Chrome DevTools | Google Developers, displaying the “JavaScript memory” counter in the task manager could do it, as this number (the number in parentheses) “represents how much memory the reachable objects on your page are using”.

For Chrome, it seems you can start it with the --js-flags="--expose-gc" parameter and then use window.gc() to trigger a GC (I have not tested it).

Also, the trashcan button on the Performance tab should do a garbage collect when clicked.

Now I realize you may already have done this and noticed the memory increase after having triggered the GC by hand, in which case you can just forget what I have said above!
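For what it’s worth, the same V8 flag is available in Node (my own illustration, not from the thread; needs Node 13.9+ for the `arrayBuffers` field), which makes it easy to sanity-check whether memory is actually reclaimable after a forced collection:

```javascript
// Run with: node --expose-gc gc-check.js  (global.gc is undefined otherwise)
function arrayBufferMB() {
  if (typeof global.gc === "function") global.gc(); // force a collection before measuring
  return process.memoryUsage().arrayBuffers / (1024 * 1024);
}

let textureData = new Uint8Array(32 * 1024 * 1024); // stand-in for decoded KTX data
const whileRetained = arrayBufferMB();
textureData = null;                                 // drop the last reference
const afterRelease = arrayBufferMB();
console.log(`retained: ${whileRetained.toFixed(1)} MB, released: ${afterRelease.toFixed(1)} MB`);
```

If the “released” number stays high even after a forced GC, something is genuinely holding a reference; if it drops, it was just lazy reclamation.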

KTX files are normally much bigger than JPGs. They are also probably bigger than PNGs, since KTX files have their mipmaps pre-generated inside. Could this be a caching “feature”, maybe for re-use?
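To put rough numbers on that (my back-of-envelope arithmetic, assuming DXT5/BC3 compresses 8:1 versus RGBA8 and a full mip chain adds about one third):

```javascript
// Rough size arithmetic for a 1024x1024 texture.
const w = 1024, h = 1024;
const rgba8 = w * h * 4;                     // what a decoded JPG/PNG uploads (4 bytes/px)
const dxt5 = w * h;                          // DXT5/BC3 is 1 byte/px
const rgba8Mips = Math.floor(rgba8 * 4 / 3); // GPU-generated mips add ~1/3
const dxt5Mips = Math.floor(dxt5 * 4 / 3);   // mips pre-baked into the KTX file
console.log({ rgba8Mips, dxt5Mips });        // the KTX file is bigger on disk than a
                                             // typical JPG, but 4x smaller in VRAM
```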


There is no decompressor for KTX1 files. I think you are talking about KTX2+BasisU files.

Well, I was using “decompressor” for whatever the code that parses KTX1 files is doing :slight_smile:


We also noticed that in Chrome, having the memory profiler running in dev tools was actually consuming a big chunk of memory and retaining it on refresh! We had to close dev tools, THEN refresh to clear it. It was throwing off our attempts to hunt down memory leaks.


Hey, thanks for everyone’s posts here. The situation was way more complicated than I expected, and we’re still working on our profiling tools so we can work out exactly what’s going on. On the positive side, we’re getting far fewer dropped frames due to textures being uploaded to the GPU, the world runs a lot smoother, and we get fewer OOM crashes on iOS. It was a massive job transcoding all our textures, but we got there. :slight_smile: I’ll update this post once I work out what’s going on.