All-
I am working on a WebGPU project which solves a set of PDEs for ocean waves (a 2D horizontal, Navier-Stokes-type model), and would like to bring in Babylon viz capabilities. If interested, here is the project:
https://plynett.github.io/ [click Run Example Simulation to get an idea]
I have brought Babylon into the project, and can render simple 2D textures in a WebGPUEngine, e.g.
var txState_image = new BABYLON.Texture("./renderedImage.jpg", scene);
customMaterial.setTexture("textureSampler", txState_image);
My issue is that I am unable to figure out a way to get WebGPU texture data to the Babylon engine without first bringing the WebGPU texture data back to the CPU. This is of course a huge performance bottleneck.
For example, I have the WebGPU texture "txWaveState" created with the function pasted below. In a perfect world I would be able to do:
var txState_image = new BABYLON.Texture(txWaveState, scene);
- or -
customMaterial.setTexture("textureSampler", txWaveState);
- or -
set a GPUTextureUsage flag that opens the texture up to Babylon.
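Conceptually, even a GPU-side texture-to-texture copy would be fine; it is the CPU readback I need to avoid. A rough sketch of what I'm imagining (hypothetical: it assumes Babylon's WebGPUEngine and my solver share the same GPUDevice, and that there is some way to get the GPUTexture behind a Babylon texture, which is the part I can't find; babylonGpuTexture below is a made-up name):

```javascript
// Hypothetical sketch: copy the solver's output texture into a texture that
// Babylon samples from, entirely on the GPU (no readback to the CPU).
// Assumes both textures live on the same GPUDevice, have the same format and
// size, txWaveState has COPY_SRC usage, and the destination has COPY_DST.
function blitWaveStateToBabylon(device, txWaveState, babylonGpuTexture, width, height) {
  const encoder = device.createCommandEncoder();
  encoder.copyTextureToTexture(
    { texture: txWaveState },       // source: simulation wave state
    { texture: babylonGpuTexture }, // destination: texture Babylon renders with
    [width, height, 1]
  );
  device.queue.submit([encoder.finish()]);
}
```

Even this copy per frame would be acceptable; what I can't figure out is how to obtain the destination handle from Babylon, or how to hand txWaveState to Babylon directly.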
I will say that I am quite inexperienced with this type of programming (I am an old-school Fortran guy), so if I have missed something obvious or this is naive, please point me to what I should read.
I probably need to stay in the native WebGPU texture / shader environment here, as in the end compute performance matters more than viz.
Thanks!
-Pat
export function create_2D_Texture(device, width, height) {
  // rgba32float storage texture holding the 2D wave state
  return device.createTexture({
    size: [width, height, 1],
    format: 'rgba32float',
    usage: GPUTextureUsage.STORAGE_BINDING |
           GPUTextureUsage.COPY_SRC |
           GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT |
           GPUTextureUsage.TEXTURE_BINDING
  });
}