I’ve created many RenderTargetTextures (around 300) in WebGPU and disposed them when they were no longer needed, but the graphics memory stayed occupied.
The same code runs normally in WebGL. Is this a bug in WebGPU?
In WebGPU, when I create 300 RenderTargetTextures, graphics memory usage reaches 3066MiB.
I asked the question in the Matrix channel, as there seems to be a problem on Dawn’s side. Here’s what I posted:
Is it possible that Dawn doesn’t get rid of GPU memory (or delays it) even if we call destroy on a texture?
I have an RTX 3080 Ti with 12GB of GPU memory, and in the following examples I start at 2880MiB / 12288MiB (I use nvidia-smi to track memory usage).
Each PG simply creates textures and disposes of them after 3 seconds:
in this PG, memory goes up to 4180MiB and never goes back down
in this PG, memory goes up to 7739MiB and falls back to the starting value ~2880MiB
in this PG, memory goes up to 7739MiB and never goes back down
Examples 2 and 3 allocate the same total amount of memory, but example 2 creates 300 2048x2048 textures, while example 3 creates 1200 1024x1024 textures.
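For the record, the totals do line up. A quick sanity check, assuming RGBA8 render targets (4 bytes per texel) and no mipmaps, neither of which is confirmed by the PGs:

```typescript
// Both PGs allocate the same number of bytes, assuming
// RGBA8 (4 bytes per texel) and no mip chain.
const bytesPerTexel = 4;
const MiB = 1024 * 1024;

const example2 = 300 * 2048 * 2048 * bytesPerTexel;  // 300 large textures
const example3 = 1200 * 1024 * 1024 * bytesPerTexel; // 1200 small textures

console.log(example2 / MiB); // 4800 (MiB)
console.log(example3 / MiB); // 4800 (MiB)
console.log(example2 === example3); // true
```

4800MiB is close to the observed jump from ~2880MiB to 7739MiB in both cases.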
So, it would seem that Dawn only releases memory immediately if the textures are “large enough”?
If so, is it possible to get Dawn to release the memory immediately by some means? Otherwise it will be almost impossible to spot memory leaks on our end.
Both samples allocate the same total amount of memory, but the second allocates fewer but larger textures.
Another observation, which really looks like a memory leak to me:
I browse the first link: GPU memory usage increases from 3000MB to 4500MB and does not decrease, as described above.
I update the URL in the address bar, adding a “b” to browse the second link: GPU memory increases from 4500MB to 6000MB, then drops back to 4500MB after 3s.
=> There’s already a problem here: when I replaced the first URL with the second, the GPU memory allocated while browsing the first link should have been reclaimed (?)
If I press F5 to refresh the page, I always go from 4500MB to 6000MB and back to 4500MB; the 1500MB of memory allocated by the first link is still not freed.
I have to close the tab to get the memory back to 3000MB.
So it’s not a bug; see the answer from the issue:
It is because of the different memory allocation strategies for resources greater than 4MB and smaller than 4MB on the D3D12 and Vulkan backends.
Take the D3D12 backend for example:
For resources smaller than 4MB, memory suballocation is applied, which means such smaller resources are allocated on a D3D12 heap (=4MB) with CreatePlacedResource(), and currently the heap isn’t released when the resource is deleted, so that the heap can be reused the next time small resources are allocated.
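That explanation fits the per-texture sizes in the PGs. The arithmetic below assumes RGBA8 (4 bytes per texel) and no mipmaps; the exact threshold and comparison Dawn applies are an assumption here, not something the issue spells out:

```typescript
// Per-texture sizes, assuming RGBA8 (4 bytes per texel) and no
// mipmaps. A 2048x2048 texture is well above a ~4MB threshold,
// so it presumably gets its own allocation and is freed on
// destroy; a 1024x1024 texture sits right around the threshold,
// so it may be placed on a shared heap that Dawn keeps alive
// for reuse.
const bytesPerTexel = 4;
const MiB = 1024 * 1024;

const small = 1024 * 1024 * bytesPerTexel; // per 1024x1024 texture
const large = 2048 * 2048 * bytesPerTexel; // per 2048x2048 texture

console.log(small / MiB); // 4 (MiB)
console.log(large / MiB); // 16 (MiB)
```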
It’s possible to disable the suballocation feature to make this kind of debugging possible. From the issue:
You can run Chrome with the command line parameter "--enable-dawn-features=disable_resource_suballocation" to disable resource suballocation, then all the resources will be allocated on their own pieces of memory which will always be destroyed after the release of the corresponding resource.