How to implement general computing (GPGPU) with WebGL

I want to use a custom shader to run general-purpose calculations (hundreds of thousands of matrix operations; a Compute Shader is not an option for me because it cannot be enabled in some environments).

I searched for relevant information

  • There is an article describing how to implement GPGPU in WebGL.
  • I have also seen some PGs showing the power of GPU picking.

As a newbie, I don't know how to reproduce this in Babylon (a minimal PG would be ideal), and I need to be able to read the results back to the CPU for processing.

Thanks for any help!

Maybe you can use a procedural texture with a fragment shader and then you can retrieve the results by reading the pixels of the texture? It’s a bit hacky but that would probably work.

As long as you know the dimensions of your texture, you can treat the UV coordinate as a global invocation id, like for compute shaders!
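The UV-to-invocation-id mapping can be sketched in plain JavaScript (the shader-side GLSL would do the same arithmetic; the function name is mine):

```javascript
// Map a UV coordinate (0..1) to an integer "invocation id", i.e. the
// pixel coordinate inside a width x height texture.
function uvToInvocationId(u, v, width, height) {
  // Clamp so u = 1.0 or v = 1.0 does not fall outside the texture.
  const x = Math.min(Math.floor(u * width), width - 1);
  const y = Math.min(Math.floor(v * height), height - 1);
  return { x, y };
}

// The fragment shader runs once per output pixel, so a 100x100 texture
// effectively gives you 100 * 100 invocations, one per (x, y) pair.
uvToInvocationId(0.5, 0.5, 100, 100); // → { x: 50, y: 50 }
```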

Here is a simple PG with a procedural texture:

And here is the documentation: Creating Procedural Textures | Babylon.js Documentation
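For the "retrieve the results" part, the rough idea (assuming the texture reads back as a flat array with one RGBA value per pixel, which is what readPixels-style APIs typically return) is to index into that array on the CPU:

```javascript
// Read the 4 channels of the pixel at (x, y) from a flat RGBA array,
// as returned by readPixels-style APIs, for a texture `width` pixels wide.
function readPixel(pixels, width, x, y) {
  const i = (y * width + x) * 4; // 4 channels per pixel
  return [pixels[i], pixels[i + 1], pixels[i + 2], pixels[i + 3]];
}

// Hypothetical 2x2 float texture where each pixel stores one result in R.
const pixels = new Float32Array([
  1, 0, 0, 1,   2, 0, 0, 1, // row y = 0
  3, 0, 0, 1,   4, 0, 0, 1, // row y = 1
]);
readPixel(pixels, 2, 1, 1); // → [4, 0, 0, 1]
```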


Thanks for the answer; I believe what you described is completely feasible.
But I don't understand how the fragment shader maps to each pixel of the texture one by one.
In my understanding, the steps might look like this:

  1. Suppose I have a 100 x 100 matrix; I can save it into a RawTexture.
  2. I need to design a fragment shader that reads the pixel coordinates from the RawTexture and computes the results. I am very confused at this step:
  • The fragment shader may not run exactly 100 x 100 times, and it does not seem to have an integer index corresponding to the pixel coordinates in the RawTexture (I searched and found that gl_FragCoord can apparently be converted). Is it guaranteed to be exactly 100 x 100 invocations?
  • The vertex shader also seems to need to be defined, but I don't know how: just 4 vertices as a plane, or 100 x 100 vertices for a one-to-one mapping?
  3. Here I need to use a RenderTargetTexture to render, and I have questions:
  • Do I need to create a mesh? I only use this for calculations; it seems it could run without a mesh?

Sorry for asking so many questions; I've been stuck on this concept for a long time…

When using a fragment shader, you have the vUV varying, which ranges between 0 and 1 in both the x and y dimensions.

When you declare a procedural texture, you declare the size of the texture (like 1024x1024).

So in your fragment shader, if you multiply the vUV by the size of the texture, you get a vec2 that ranges between 0 and the size of your texture.

By converting that vec2 to ints, you will get the pixel coordinate. (This process might not be exact 100% of the time, I haven’t tried it for myself)
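Putting those steps together, a fragment shader for this could look like the sketch below (untested; `inputData` and `textureSize` are uniform names I made up, and the computation is a placeholder):

```glsl
precision highp float;

varying vec2 vUV;            // supplied by the procedural texture's vertex stage
uniform sampler2D inputData; // e.g. a 100x100 RawTexture holding your matrix
uniform vec2 textureSize;    // e.g. vec2(100.0, 100.0), set from the JS side

void main(void) {
    // Integer pixel coordinate, i.e. the "global invocation id"
    ivec2 id = ivec2(vUV * textureSize);

    // Read this invocation's input value and do some placeholder work on it
    vec4 value = texture2D(inputData, vUV);
    float result = value.r * 2.0; // replace with your real computation

    // id.x / id.y are available here if your math needs integer indices
    gl_FragColor = vec4(result, float(id.x), float(id.y), 1.0);
}
```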

When using a procedural texture, you don’t need to care about the vertex shader, BabylonJS does the work for you like in the PG I have linked.

No, the texture can exist on its own without a mesh.

Here is a simple PG that logs the texture data to the console to get you started:

Thank you very much, it looks like a procedural texture can be used for calculations :smile:

In my project, I need to render a point cloud and select a rectangle with the mouse to obtain the coordinates of the target points (many points); my input is a depth map.
So here's what I did (hoping for better advice):

  1. Convert to xyz coordinates through vertex shader
  2. Get the color corresponding to the intensity through the fragment shader

My goal is to get all the xyz coordinates and point indices within the rectangular region, and with your example I think I will do this:

  1. I can't manipulate a texture in the vertex shader, so I need to find a way to save xyz to a texture in the fragment shader (this stumps me).
  2. Use a procedural texture to process the xyz data and extract the required points.

I have a PG that simply simulates some point clouds (but in reality the CPU does not know the xyz coordinates).

I don’t know how to save the xyz from the shader so that the procedural texture can process it.

By the way, I want to get to the bottom of this: can similar things be achieved with custom shaders? A procedural texture seems to be a high-level wrapper, and I really want to learn the lower-level concepts.

I am writing a similar shader; the following is a PG:

Thanks again! :+1:

Using the vertex shader to compute the positions is fast but I don’t know any way to get them back from the GPU :thinking:

I don’t know how your point cloud is implemented in your project but if each particle can have a unique index, then it is possible to associate each index to a pixel in a texture. Then you would be computing the positions of the points inside the fragment shader (cursed I know ^^) exactly in the same way as in your vertex shader.
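The index-to-pixel association can be sketched like this in plain JS (the function names are mine; in the shader you would recover the index from gl_FragCoord or vUV the same way):

```javascript
// Associate each particle index with a unique pixel in a texture that is
// `width` pixels wide, and back. Pixel (x, y) handles particle y * width + x,
// so a fragment shader can compute per-particle data one pixel at a time.
function indexToPixel(index, width) {
  return { x: index % width, y: Math.floor(index / width) };
}

function pixelToIndex(x, y, width) {
  return y * width + x;
}

indexToPixel(205, 100); // → { x: 5, y: 2 }
```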

Using a filter to keep only the points lying in a plane can then be done on the GPU or CPU side, depending on what you want, I suppose. If you do it on the GPU, you can write 1 to the texture when the point is in the plane and 0 otherwise.
On the CPU you would just filter your point data with a distance-to-plane function, I guess.
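The CPU-side filter could be as simple as this sketch (an axis-aligned rectangle test on positions read back from the GPU; a distance-to-plane test would follow the same pattern):

```javascript
// Keep the indices of points that fall inside a rectangle in the x/y plane.
// `points` is a flat [x0, y0, z0, x1, y1, z1, ...] array, as you would get
// after reading the computed positions back from a texture.
function pointsInRect(points, xMin, xMax, yMin, yMax) {
  const hits = [];
  for (let i = 0; i * 3 < points.length; i++) {
    const x = points[i * 3];
    const y = points[i * 3 + 1];
    if (x >= xMin && x <= xMax && y >= yMin && y <= yMax) {
      hits.push(i); // the index doubles as the point's id
    }
  }
  return hits;
}

pointsInRect([0, 0, 5,  2, 3, 1,  9, 9, 9], -1, 4, -1, 4); // → [0, 1]
```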

Thank you, I searched for relevant information. It seems that the WebGL2 transform feedback buffer can update vertices directly from the vertex shader, without resorting to tricks in the fragment shader, but there doesn't seem to be much more information about it.

I'm trying to write the relevant code and hope to make progress soon. Thanks again!

I didn't know about those! This looks cleaner than using a fragment shader, but I don't think the logic behind them is exposed by the engine; you might need to use WebGL2 directly :confused:

Yes, we don’t expose usage of transform feedback buffers. Also, note that it’s a WebGL2 feature only (it won’t work in WebGPU).


The reason being that in WebGPU you would usually rely on compute shaders for those kinds of tasks, due to the inherent complexity of transform feedback.

Thank you for your enthusiastic replies.
I am working hard to learn how to use WebGL to do the calculations, and then do similar things with Babylon.js.
For compatibility, I'll stay with the WebGL API (even though WebGL2 looks cleaner) :smile: