Graphics memory problem with RenderTargetTexture in WebGPU?

In WebGPU, I’ve created many RenderTargetTextures (around 300) and disposed of them once they were no longer needed, but the graphics memory remained occupied.

The same code behaves normally in WebGL. Is this a bug in WebGPU?

In WebGPU, when I create 300 RenderTargetTextures, the graphics memory usage is 3066MiB.

After 3 seconds, all the RenderTargetTextures are disposed, yet the graphics memory usage is still 1500+MiB.

In WebGL, the graphics memory usage is about 500MiB, which is correct.
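For context, here is a minimal sketch of the kind of code involved (the texture count and size are illustrative, and it assumes an existing Babylon.js scene):

```js
// Create a batch of render target textures.
const rtts = [];
for (let i = 0; i < 300; i++) {
  rtts.push(new BABYLON.RenderTargetTexture("rtt" + i, 1024, scene));
}

// Dispose them all after 3 seconds; in WebGL the graphics memory drops
// back down, in WebGPU much of it stays allocated.
setTimeout(() => {
  for (const rtt of rtts) {
    rtt.dispose();
  }
}, 3000);
```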

Ohhhh interesting, @Evgeni_Popov any idea?

I asked the question in the Matrix channel, as there seems to be a problem on Dawn’s side. Here’s what I posted:


Is it possible that Dawn doesn’t release GPU memory (or delays releasing it) even if we call destroy on a texture?

I have an RTX 3080 Ti with 12GB of GPU memory, and in the following examples I start at 2880MiB / 12288MiB (I use nvidia-smi to track memory usage).

The PG simply creates textures and disposes them after 3 seconds:

  • in this PG, memory goes up to 4180MiB and never goes back down

  • in this PG, memory goes up to 7739MiB and falls back to the starting value ~2880MiB

  • in this PG, memory goes up to 7739MiB and never goes back down

Examples 2 and 3 allocate the same total amount of memory, but example 2 creates 300 2048x2048 textures, while example 3 creates 1200 1024x1024 textures.
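The arithmetic, assuming 4 bytes per pixel (RGBA8), shows why the totals match:

```js
const MiB = 1024 * 1024;
const example2 = (300 * 2048 * 2048 * 4) / MiB;  // 300 x 16MiB = 4800MiB
const example3 = (1200 * 1024 * 1024 * 4) / MiB; // 1200 x 4MiB = 4800MiB
```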

So, it would seem that Dawn only releases memory immediately if the textures are “large enough”?

If so, is it possible to get Dawn to release the memory immediately by some means? Otherwise, it will be almost impossible to spot memory leaks on our end.

Follow-up I posted to the Matrix channel:


I was able to reproduce the same behavior outside Babylon:

Both samples allocate the same amount of memory, but the second allocates fewer but larger textures.
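A standalone repro can be sketched with the raw WebGPU API (assuming a WebGPU-capable browser and a module script, since it uses top-level await; the 300 x 2048x2048 case is shown):

```js
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Create 300 large render targets.
const textures = [];
for (let i = 0; i < 300; i++) {
  textures.push(device.createTexture({
    size: [2048, 2048],
    format: "rgba8unorm",
    usage: GPUTextureUsage.RENDER_ATTACHMENT,
  }));
}

// Explicitly destroy them all after 3 seconds; whether the GPU memory
// actually drops turns out to depend on the texture size.
setTimeout(() => {
  for (const t of textures) {
    t.destroy();
  }
}, 3000);
```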

Another observation, which really looks like a memory leak to me:

  • I browse the first link. GPU memory usage increases from 3000MiB to 4500MiB and does not decrease, as described above.
  • I update the URL in the address bar, adding a “b” to browse the second link. GPU memory increases from 4500MiB to 6000MiB, then drops back to 4500MiB after 3s.
    => there’s already a problem here: when I replaced the first URL with the second, the GPU memory allocated while browsing the first link should have been reclaimed (?)
  • If I press F5 to refresh the page, I always go from 4500MiB to 6000MiB and back to 4500MiB; the 1500MiB of memory allocated by the first link is still not freed.
  • I have to close the tab to get the memory back to 3000MiB.

Confirmed to be a problem in Dawn; I created an issue:

https://issues.chromium.org/issues/377410074

Great job! :+1:

So, it’s not a bug; see the answer from the issue:


It is because of the different memory allocation strategies for resources that are greater than 4MB and smaller than 4MB on the D3D12 and Vulkan backends.

Take the D3D12 backend for example:

  • For resources that are smaller than 4MB, memory suballocation is applied: such small resources are allocated on a D3D12 heap (=4MB) with CreatePlacedResource(), and currently the heap isn’t released when the resource is deleted, so that it can be reused the next time small resources are allocated.
  • For resources that are larger than 4MB, memory suballocation isn’t applied: the larger resource is created with CreateCommittedResource(), and its memory is freed at the same time the resource is destroyed (https://source.chromium.org/chromium/chromium/src/+/main:third_party/dawn/src/dawn/native/d3d12/ResourceAllocatorManagerD3D12.cpp;l=473).

I’ve tested it: if you allocate too much memory, at some point the heap memory will be freed up.

So there is no leak, but this behavior makes it impossible to debug memory leaks in our applications…
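In other words, the decision rule described in the answer looks roughly like this (an illustrative sketch, not Dawn’s actual code; the exact threshold comparison is an assumption based on the behavior observed with 4MiB textures above):

```js
const SUBALLOCATION_LIMIT = 4 * 1024 * 1024; // the 4MB heap size from the issue

function allocationStrategy(resourceByteSize) {
  if (resourceByteSize <= SUBALLOCATION_LIMIT) {
    // CreatePlacedResource() on a shared 4MB heap; the heap is kept
    // alive after destroy() so it can be reused for later allocations.
    return "suballocated (heap retained)";
  }
  // CreateCommittedResource(); the memory is freed as soon as the
  // resource is destroyed.
  return "committed (freed on destroy)";
}

allocationStrategy(1024 * 1024 * 4); // 4MiB RGBA8 texture  -> heap retained
allocationStrategy(2048 * 2048 * 4); // 16MiB RGBA8 texture -> freed on destroy
```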

I’ve tested repeatedly creating and destroying textures, and the memory does indeed get reused, but it is really inconvenient for debugging.

It’s possible to disable the suballocation feature and make debugging feasible. From the issue:

You can run Chrome with the command line parameter "--enable-dawn-features=disable_resource_suballocation" to disable resource suballocation, then all the resources will be allocated on their own pieces of memory which will always be destroyed after the release of the corresponding resource.
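For example (assuming the Chrome binary is on your PATH; the executable name and path vary by platform):

```
chrome --enable-dawn-features=disable_resource_suballocation
```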
