Data transfer from compute shader to renderer

When I started learning about compute shaders a few weeks ago, I looked at the Babylon examples in the docs and realized that I should first get a basic idea of WebGPU and WGSL to understand what is going on.
Now I have a project where a compute shader (set up outside of the Babylon classes) calculates new positions for a large particle system each frame, and then I let Babylon render the particle meshes with an SPS.
The bottleneck is the GPU → CPU data shoveling (mapping a buffer), which I knew was slow, but I didn’t know how crazy slow… Obviously, it feels wrong to pull data from the GPU just for Babylon to put it back on the GPU to render it.

My question is: if I used the compute shader class within Babylon, could I avoid this bottleneck and render the SPS with data taken directly from the GPU buffer?

I tried to understand the “Boids” example again, but honestly, it’s the same mystery to me as it was before I knew anything about WebGPU! :joy: It looks like it renders white triangles directly from the shader?


@Evgeni_Popov would be the best for this issue :slight_smile:

It should work.

I think this post should help you regarding the setup of a vertex buffer that can be updated by a compute shader:
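In short, the idea is to create the buffer as a StorageBuffer with vertex creation flags and expose the same GPU buffer through a VertexBuffer. A minimal sketch along the lines of the Boids example (assuming the WebGPU engine, a layout of 4 floats per particle, and placeholder names like mesh and particleData):

```ts
// Sketch: one GPU buffer that a compute shader writes to and that the renderer
// reads as an (instanced) vertex buffer — no CPU round trip.
const numParticles = 10000;
const particleData = new Float32Array(numParticles * 4); // e.g. xy position + xy velocity

// STORAGE + VERTEX usage so the buffer can be shared between compute and rendering.
const particleBuffer = new BABYLON.StorageBuffer(
    engine,
    particleData.byteLength,
    BABYLON.Constants.BUFFER_CREATIONFLAG_VERTEX | BABYLON.Constants.BUFFER_CREATIONFLAG_READWRITE
);
particleBuffer.update(particleData);

// The very same buffer, seen as a per-instance vertex attribute
// (stride 4 floats, 2 floats consumed for "a_particlePos").
const positionsVB = new BABYLON.VertexBuffer(
    engine, particleBuffer.getBuffer(), "a_particlePos",
    false, false, 4, true, 0, 2
);
mesh.setVerticesBuffer(positionsVB, false); // the mesh's material/shader must declare this attribute
```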

@Evgeni_Popov

Thanks for the feedback!

Now I understand the Boids example. Three questions:

-1- What is the equivalent, for an SPS, of getting the vertex buffer data into the mesh? Currently I set the positions by defining the SPS.updateParticle function. Would it be something like SPS.mesh.setVerticesBuffer(myVertexBuffer, false)? In the Boids example, using the “magic” boidMesh.forcedInstanceCount, the buffer size info is somehow used to get things right. How one would do this for an SPS is not clear to me.

-2- Maybe an SPS is not even the optimal solution for this task? (The task is to calculate the interaction of a large number of objects in a compute shader and then update their positions.) I don’t have much experience with this yet, but using SPS.setParticles() for a standard IcoSphere with 4 subdivisions (if I didn’t miscalculate, that should be 1280 triangles), I hit the 16 ms zone (on an Apple M1 Max) at about 500 spheres, i.e. 640,000 triangles. Is this to be expected, or is it because the CPU is involved? …which brings us back to the original question…

-3- This is a more general question: what possibilities or restrictions are there for letting Babylon.js buffers/shaders cooperate with a separate compute shader? If at all possible, I would like to avoid the extra interface layer of BJS creating a compute shader and instead use the WebGPU buffer/pipeline/commandEncoder code I’ve already written.
I’m able to hook into the BJS engine._device and create my own compute shader, and it works. (Please let me know if there is something I have to be careful about here. There is a TypeScript issue with GPUBindGroupLayout concerning the ‘__brand’ property, but it can be patched.)
The problem now is how to do something like in the Boids example

this.vertexBuffers = new BABYLON.VertexBuffer(engine, this.particleBuffer.getBuffer(), "a_particlePos",...)

if particleBuffer is not a BABYLON.StorageBuffer but a normal GPUBuffer (with usage: GPUBufferUsage.STORAGE of course) in my own compute shader.

Yes, that should do it.
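A sketch of what that could look like, assuming BABYLON.WebGPUDataBuffer (the engine's DataBuffer wrapper around a GPUBuffer) can be constructed from your own buffer, and that the buffer was created with both STORAGE and VERTEX usage:

```ts
// Sketch (assumption): wrap an externally created GPUBuffer so the VertexBuffer
// constructor can consume it. myGpuBuffer must have been created with
// GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX in its usage flags.
const wrapped = new BABYLON.WebGPUDataBuffer(myGpuBuffer);

const positionsVB = new BABYLON.VertexBuffer(
    engine, wrapped, "a_particlePos",
    false, false, 4, true, 0, 2
);
```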

To my knowledge, the SPS does not use instanced rendering, it just uses one big vertex buffer. If you have 5 cubes in your SPS, the vertex buffer will contain the positions of all the vertices of all 5 cubes.

If the objects are all of the same type (say, a sphere), or you only have a few different types of objects (sphere, cube, …), maybe using thin instances would be better, because in this case you only have to update a matrix to move/rotate an object, whereas with SPS you will have to update all the vertices of the objects.

You’re a bit on your own if you want to mix Babylon with custom WebGPU code… I would advise you to port your custom WebGPU code to Babylon; there should normally be little to do, as the compute shader class in Babylon is a very thin wrapper around WebGPU compute shaders.
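For comparison, a minimal sketch of the Babylon side (myComputeWgsl, paramsBuffer and particleBuffer are placeholders for your own WGSL source and buffers):

```ts
// Roughly the same objects as a raw WebGPU setup (pipeline, bind group, dispatch),
// just wrapped: the ComputeShader class maps named bindings to group/binding indices.
const numParticles = 10000; // placeholder
const cs = new BABYLON.ComputeShader("particles", engine, { computeSource: myComputeWgsl }, {
    bindingsMapping: {
        params:    { group: 0, binding: 0 },
        positions: { group: 0, binding: 1 },
    },
});

cs.setUniformBuffer("params", paramsBuffer);      // BABYLON.UniformBuffer
cs.setStorageBuffer("positions", particleBuffer); // BABYLON.StorageBuffer

// Wait for the pipeline once, then dispatch every frame before rendering.
await cs.dispatchWhenReady(Math.ceil(numParticles / 64));
scene.onBeforeRenderObservable.add(() => {
    cs.dispatch(Math.ceil(numParticles / 64));
});
```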

@Evgeni_Popov

maybe using thin instances would be better, because in this case you only have to update a matrix to move/rotate an object, whereas with SPS you will have to update all the vertices of the objects.

Makes sense. I’ll try it out. It’s not yet clear to me how to update, in the shader, the buffers used for the thin instances’ transformation matrices. Is there any documentation on the structure of those matrices? (For a pure translation I figured out that, in the length-16 array representation of the 4x4 matrix, the x, y, z components are at indices 12, 13, 14, respectively. But how are rotations and scalings represented?)

I would advise you to port your custom WebGPU code to Babylon

In my compute shader pipeline, I use commandEncoder.copyBufferToBuffer(...) calls.
How would I do this in Babylon?

The thin instance matrices are regular matrices that you can create with BABYLON.Matrix.Compose(scale, rotation, translation), for example. You can look at the code in Maths/math.vector.ts to see the implementation.
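A minimal sketch of that, assuming a base mesh and the 16-floats-per-instance layout you already found (translation in elements 12, 13, 14):

```ts
// Build the per-instance matrix buffer on the CPU; a compute shader would write
// the same 16-float layout into a storage buffer instead.
const count = 1000;
const matrixData = new Float32Array(16 * count);

for (let i = 0; i < count; i++) {
    const m = BABYLON.Matrix.Compose(
        new BABYLON.Vector3(1, 1, 1),                                 // scaling
        BABYLON.Quaternion.RotationYawPitchRoll(Math.random(), 0, 0), // rotation
        new BABYLON.Vector3(i * 2, 0, 0)                              // translation
    );
    m.copyToArray(matrixData, i * 16); // writes the 16 floats for instance i
}

mesh.thinInstanceSetBuffer("matrix", matrixData, 16); // 16 = floats per instance
```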

Regarding copyBufferToBuffer, what do you use it for? If it is to read data back to the CPU, we have a StorageBuffer.read method which does it under the hood. We try not to expose methods that are too low level if it’s not necessary, but if there are use cases for them we can think about it.

Regarding copyBufferToBuffer, what do you use it for?

No, reading data back to the CPU is what I want to avoid! :wink:
I use it to copy one GPUBuffer to another on the GPU. For example, in a time-stepping scheme, the compute shader calculates new positions in parallel based on the old positions. Only after this is done is the whole new-position buffer copied into the old-position buffer.
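In raw WebGPU that step looks roughly like this (a sketch; device, pipeline and buffer names are placeholders):

```ts
// One simulation step: the compute pass writes newPositionBuffer, then the
// result is copied into oldPositionBuffer entirely on the GPU.
const encoder = device.createCommandEncoder();

const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(Math.ceil(numParticles / 64));
pass.end();

// GPU-to-GPU copy; both buffers need COPY_SRC / COPY_DST in their usage flags.
encoder.copyBufferToBuffer(newPositionBuffer, 0, oldPositionBuffer, 0, bufferByteSize);

device.queue.submit([encoder.finish()]);
```

(An alternative that avoids the copy altogether is to ping-pong between two buffers and swap the bind groups each step.)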

All I need is to get the data of such a position buffer into the matrix buffer for the thin instances without getting the CPU involved. I will try to get this done… cannot be so hard… :sweat_smile:

@Evgeni_Popov

The thin instances are really a completely different game compared to the SPS! I can easily render 20k interacting particles now! Thanks for the tip! :sparkles:

I wonder how the alpha channel of the colorBuffer behaves when I do
mesh.thinInstanceSetBuffer('color', colorBuffer, 4)
The alpha channel only has an effect if material.alpha < 1. Are material.alpha and the alpha channel added together? (For alpha channel = 0 the rendering is strange.)

Is there a way to use PBR materials with thin instances? I get WebGPU errors if I try.

Back to the original topic:
Regarding the data transfer from GPU to CPU, I actually made a mistake in the performance measurement. The additional time needed for the transfer is actually small compared to the time from the submission of the command encoder to the device queue until the work is done.
Now I find that, independent of whether any work is actually done in the compute shader, it takes at least 3–4 ms from the submission of the command encoder until device.queue.onSubmittedWorkDone() resolves. Of course, this will depend on the hardware, but I wonder if this is to be expected?


Alpha from the material is multiplied with alpha from the instance color.
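So, roughly (a sketch, with placeholder values):

```ts
// Per-instance RGBA colors; the effective alpha is material.alpha * instance alpha.
const count = mesh.thinInstanceCount;
const colorData = new Float32Array(4 * count);
for (let i = 0; i < count; i++) {
    colorData[i * 4 + 0] = 1;   // R
    colorData[i * 4 + 1] = 0.5; // G
    colorData[i * 4 + 2] = 0;   // B
    colorData[i * 4 + 3] = 0.5; // per-instance alpha
}
mesh.thinInstanceSetBuffer("color", colorData, 4);

mesh.material.alpha = 0.8; // effective alpha per instance: 0.8 * 0.5 = 0.4
```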

Would you have a repro for the alpha / PBR problems? PBR materials do work with thin instances.

onSubmittedWorkDone returns a promise, so its callback will be executed in a microtask after the main JavaScript execution. You can’t use it to time GPU commands.

So I’d say it’s expected: it’s the latency you’ll experience before being notified by onSubmittedWorkDone, but it’s not representative of the actual time spent on the GPU.

@Evgeni_Popov

I made a minimal repro

Tested on Apple M1 Max and Apple M2
Sonoma 14.1.1, Chrome 119

The error happens only with WebGPU, not with WebGL2.
If there is only one render() call (no render loop), there is no error, but nothing gets rendered.

Using StandardMaterial (toggle lines 65/66 in src/babylon.ts) everything works fine.

If material alpha = 1, then channel alpha (line 33) has no effect except when it’s zero (then something weird happens).

@Evgeni_Popov

onSubmittedWorkDone is a promise, so it will be executed in a micro-task after the main javascript execution. You can’t use it to time GPU commands.

Then I’m at a loss as to how we should time GPU commands. AFAIK,
await onSubmittedWorkDone and await mapAsync are the only ways to find out when the GPU has finished the work. How else would we query this?

I time these GPU commands in the animation loop, where not much else is done except scene.render, and average over many frames. Is there another way? Or are the GPU and the data traffic to it one big black box? :sunglasses:

You shouldn’t normally need to know when some GPU work finishes.

If you need some work to be finished before reusing its result afterwards, it will happen automatically, simply by the sequence of your calls and the fact that you are going to reuse a texture (or a buffer) as an input to a shader (for example).


Is it not a valid question to ask how long a compute shader, for example, takes to update some particle positions in a render loop? Whether that takes 5 ms or 50 ms is useful information, I think.

Yes, you can see the total GPU time during a frame by looking at the “GPU frame time” in the inspector.

For this to work in WebGPU, you will have to start chrome with the --enable-dawn-features=allow_unsafe_apis flag (or --enable-webgpu-developer-features for more precise timing).

We don’t provide timing at the compute shader level at this time.
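If per-pass numbers are needed, raw WebGPU does have timestamp queries. A minimal sketch, assuming the device was requested with the "timestamp-query" feature and a browser that supports timestampWrites on compute passes:

```ts
// Time a single compute pass with timestamp queries (profiling only).
const querySet = device.createQuerySet({ type: "timestamp", count: 2 });
const resolveBuffer = device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});
const readBuffer = device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass({
    timestampWrites: { querySet, beginningOfPassWriteIndex: 0, endOfPassWriteIndex: 1 },
});
// ... setPipeline / setBindGroup / dispatchWorkgroups ...
pass.end();

encoder.resolveQuerySet(querySet, 0, 2, resolveBuffer, 0);
encoder.copyBufferToBuffer(resolveBuffer, 0, readBuffer, 0, 16);
device.queue.submit([encoder.finish()]);

// Reading the two timestamps does map 16 bytes back to the CPU, but only for profiling.
await readBuffer.mapAsync(GPUMapMode.READ);
const [start, end] = new BigUint64Array(readBuffer.getMappedRange());
console.log(`compute pass: ${Number(end - start) / 1e6} ms`); // timestamps are in nanoseconds
readBuffer.unmap();
```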

I made a minimal repro
GitHub - h-a-n-n-e-s/thin_instance_pbr

@Evgeni_Popov

Is there any news for the “thin instance + pbr material” issue with WebGPU?

Sorry, I missed that one.

Here’s the fix:
