Compute Shaders Execution Order

Hello, everyone. Sorry if this question is silly or has already been answered, but is it possible to control the execution order of compute shaders, i.e. to call the next compute shader only once the previous one has finished its work? I am trying to make this demo: https://playground.babylonjs.com/#XMXDAA#25 use WebGPU compute shaders to compute acceleration, pressure, density and position in parallel. I successfully ported the computeDensityAndPressure() method to a compute shader, but I have a problem with computeAcceleration(): the first time it runs, my acceleration shader receives particle data where the density equals zero, which causes a division by zero. To “fix” this, I added a corresponding if statement, but after that the water starts jumping from one side of the box to the other and slowing down…

If you have any advice or recommendations, please share. Thank you for your attention.

Adding @Evgeni_Popov, who has a better understanding of compute shaders.


Each dispatch is automatically synchronized, so the next one doesn’t start until the previous one has finished.

So your problem is probably not a synchronization problem, but something else.
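
For illustration, here is a minimal sketch of what the two passes could look like with Babylon.js ComputeShader (the shader sources, buffer names and workgroup size are hypothetical placeholders for your ports of computeDensityAndPressure() and computeAcceleration()):

const options = { bindingsMapping: { "particles": { group: 0, binding: 0 } } };
const densityPressureCS = new BABYLON.ComputeShader("densityPressure", engine, { computeSource: densityPressureWGSL }, options);
const accelerationCS = new BABYLON.ComputeShader("acceleration", engine, { computeSource: accelerationWGSL }, options);

densityPressureCS.setStorageBuffer("particles", particlesBuffer);
accelerationCS.setStorageBuffer("particles", particlesBuffer);

// In the render loop, once both shaders are ready: the dispatches are queued in
// order, so the acceleration pass sees the densities/pressures written by the
// previous pass. No explicit barrier is needed on the JavaScript side.
densityPressureCS.dispatch(Math.ceil(numParticles / 64));
accelerationCS.dispatch(Math.ceil(numParticles / 64));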

PIX can help you understand what’s going on (if you’re running Windows), and this small gist can help you configure PIX.


Okay, thanks for the reply, I will try this. Also, is it correct that code placed after the dispatch() call will not be executed until the compute shader has finished its work?

No, the JavaScript code will continue executing and won’t wait for the dispatch to finish. The dispatch call sends work to the GPU, which is executed in parallel with the JavaScript code.

On this topic, is it possible to put a “barrier” in the JavaScript code to wait for the dispatch execution to complete? Do we have to put the dispatch in a promise? Or should it be done directly in the compute shader code with something like workgroupBarrier()?

There’s no callback mechanism that would call JavaScript when all the work from a dispatch is finished.

You shouldn’t normally need such a mechanism; if you do, it probably means you want to retrieve a texture or a buffer filled by a compute shader, and in that case the read(texture/buffer) call returns a promise, which is a synchronization point.
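
For example, a minimal sketch (computeShader, numElements and resultBuffer are hypothetical names; resultBuffer is assumed to be a BABYLON.StorageBuffer written by the shader):

// Queue the compute pass on the GPU
computeShader.dispatch(Math.ceil(numElements / 64));

// read() returns a promise that resolves once the GPU work feeding the buffer
// has completed, so it acts as the synchronization point.
resultBuffer.read().then((data) => {
    const values = new Float32Array(data.buffer);
    // ... use the values on the CPU side
});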

Ok great! Thank you for the answer, it confirms that I am implementing things correctly.

In my specific case, yes, I needed such a mechanism and I implemented it as you describe.
Basically, I am solving the 2D wave equation with the finite-difference method in a compute shader, and after the dispatch I retrieve the data from a storage buffer with a read() call to update the vertex buffer of a 2D mesh.
It works great so far!

Can’t wait to see the final result!

Regarding:

You could directly update the vertex buffer used by the mesh from the compute shader, avoiding a costly read-back from the GPU to the CPU.

That’s what is done in the Boids demo here: https://playground.babylonjs.com/?webgpu#3URR7V#186
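
In code, the approach looks roughly like this (a sketch in the spirit of the Boids demo; names and sizes are hypothetical, and the storage buffer needs the VERTEX creation flag so it can also be bound as a vertex buffer):

// Storage buffer written by the compute shader, also usable as a vertex buffer
const positionsStorageBuffer = new BABYLON.StorageBuffer(
    engine,
    numVertices * 3 * 4, // 3 floats per vertex
    BABYLON.Constants.BUFFER_CREATIONFLAG_READWRITE | BABYLON.Constants.BUFFER_CREATIONFLAG_VERTEX
);
computeShader.setStorageBuffer("positions", positionsStorageBuffer);

// Expose the same GPU buffer to the mesh as its position vertex buffer:
// no read-back from the GPU to the CPU is involved.
const positionsVB = new BABYLON.VertexBuffer(
    engine,
    positionsStorageBuffer.getBuffer(),
    BABYLON.VertexBuffer.PositionKind,
    true,  // updatable
    false  // postponeInternalCreation
);
mesh.setVerticesBuffer(positionsVB);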

Hello again. Am I right that we can do the same for vertex buffers that are used in FluidRenderingObject.vertexBuffers?

Yes, a vertex buffer can be updated in a compute shader (that’s what the Boids example is doing).


I will give it a try, thanks for the idea. A great profiling exercise in perspective :slight_smile:

However, we need to manually recompute the normals of the mesh, right? It is not going to be done automatically by setVerticesBuffer()?

Indeed, the normals won’t be recomputed. But you could do it on the GPU instead of passing a vertex buffer, if that is an option for you.

I am not sure I fully understand. Do you mean we could compute the normals directly from the compute shader, store them for example in a storage buffer, and pass them externally to the vertex buffer via “updateVerticesData()”?

Currently I am doing it that way:

storage_buffer.read().then((res) => {
    // Positions computed on the GPU, read back to the CPU
    const positions = new Float32Array(res.buffer);
    this._geom.updateVerticesData(BABYLON.VertexBuffer.PositionKind, positions);

    // Recompute the normals on the CPU (same number of floats as the positions)
    const new_normals = new Float32Array(positions.length);
    BABYLON.VertexData.ComputeNormals(positions, this._geom.getIndices(), new_normals);
    this._geom.updateVerticesData(BABYLON.VertexBuffer.NormalKind, new_normals);
});

“storage_buffer” is the output of the compute shader, and “this._geom” is the mesh of the 2D plane. It works in principle, but recreating the “new_normals” array on each update call is certainly not the most efficient way to proceed!

No, what I meant is that you would not pass a vertex buffer with the normals, but compute a normal directly in the fragment shader. If you are using the standard or PBR material, that’s what you automatically get if you don’t provide a vertex buffer with normals.

So, to test it in your case, simply don’t provide the normal vertex buffer and see if it works for you.
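
For example, if the mesh currently has a normals buffer, something like this (a quick sketch) should drop it so the material falls back to fragment-shader normals:

mesh.removeVerticesData(BABYLON.VertexBuffer.NormalKind);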

Else, you could basically do in a compute shader what ComputeNormals does, to avoid a round-trip from the GPU to the CPU to get the updated vertices. It’s probably a bit of work, though.
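
If you go down that road, here is a naive sketch of the idea (one invocation per vertex, looping over every triangle, so O(vertices × faces): it only illustrates the principle, not an optimized implementation):

const computeNormalsWGSL = /* wgsl */ `
    struct Params { vertexCount : u32, indexCount : u32 }

    @group(0) @binding(0) var<storage, read> positions : array<f32>;
    @group(0) @binding(1) var<storage, read> indices : array<u32>;
    @group(0) @binding(2) var<storage, read_write> normals : array<f32>;
    @group(0) @binding(3) var<uniform> params : Params;

    fn getPos(i : u32) -> vec3<f32> {
        return vec3<f32>(positions[i * 3u], positions[i * 3u + 1u], positions[i * 3u + 2u]);
    }

    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id : vec3<u32>) {
        let v = id.x;
        if (v >= params.vertexCount) {
            return;
        }
        var n = vec3<f32>(0.0);
        // Accumulate the face normal of every triangle that references this vertex
        for (var t = 0u; t < params.indexCount; t = t + 3u) {
            let i0 = indices[t];
            let i1 = indices[t + 1u];
            let i2 = indices[t + 2u];
            if (i0 == v || i1 == v || i2 == v) {
                let p0 = getPos(i0);
                let p1 = getPos(i1);
                let p2 = getPos(i2);
                n = n + cross(p1 - p0, p2 - p0);
            }
        }
        // Avoid normalizing a zero vector for vertices not referenced by any triangle
        if (dot(n, n) > 0.0) {
            n = normalize(n);
        }
        normals[v * 3u] = n.x;
        normals[v * 3u + 1u] = n.y;
        normals[v * 3u + 2u] = n.z;
    }
`;

The normals storage buffer would then be wrapped as a NormalKind vertex buffer, in the same way as the positions.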

Hello once again. I have a question about PIX. Am I right that I can watch my shader variables using this tool? And to do this, I need to follow this instruction from the gist:

If you want the debug markers to work inside PIX, you need to perform those steps: https://dawn.googlesource.com/dawn/+/refs/heads/main/docs/debug_markers.md

This link seems to be broken.

NOT_FOUND: Requested entity was not found
myEmail doesn’t have access to this page or the resource was not found. Try switching accounts

Is it possible to get a new link? Thank you.

No, this file allows you to get some debug information in the report, not to watch variables.

Here’s the fixed link:

https://dawn.googlesource.com/dawn/+/refs/heads/chromium/4479/docs/debug_markers.md

Here’s the main doc page of PIX, if that can help:


Thanks. But is it possible somehow to watch variables in each compute shader instance?

I don’t think it’s possible. There’s no debugger for the shader (compute or vertex/fragment) code.

Several people are working on a debugger for WebGPU, but it’s not quite there yet.


No, what I meant is that you would not pass a vertex buffer with the normals, but compute a normal directly in the fragment shader. If you are using the standard or PBR material, that’s what you automatically get if you don’t provide a vertex buffer with normals.

So, to test it in your case, simply don’t provide the normal vertex buffer and see if it works for you

Hi, I would like to get back to that. I am now doing:

this._vertexBuffers_update = new BABYLON.VertexBuffer(this._engine, this._vertexBuffer_storage_update.getBuffer(), BABYLON.VertexBuffer.PositionKind, true, false);
this._geom.setVerticesBuffer(this._vertexBuffers_update, true);

and it works well to update the mesh with the new vertex position data; however, as expected, the normals are wrong and never recomputed.

Based on what you are saying, I should not pass a vertex buffer with normals, but how do I do that correctly? I think this is what I am doing, but in my case, if I open the Inspector, I get an error in the console saying that NormalKind returns a null array and that the normals cannot be computed. So it seems that the fragment shader is doing nothing to recompute the normals. Is there a way to force the shader to recompute the normals?

Thanks