WebGPU: Directly rendering to the WebGPU canvas context?

I’ve been looking at this boid example:

And what I’ve noticed is that the GPU frame time is lower than when rendering a single glTF object with animations disabled, as seen here:

The total frame time in the boid example is also significantly lower.

I’m still learning about WebGPU and how Babylon.js integrates it into the framework, but right now it seems like the boid example is using an arbitrary mesh as the target of both a WebGPU compute shader and a render shader that together are responsible for all the rendering done in the scene. In the second playground, by contrast, the single glTF model is rendered relying only on the standard methods of the Babylon.js WebGPU engine. It feels like the boid example is taking a more direct path to rendering on the canvas’s WebGPU context, hence the better frame times. Is this correct, or am I not understanding something?

EDIT: I’m doing these comparisons in Chrome Canary.

boids has 1 draw call and 1 material only in the scene, while the gltf model has 10 draw calls. glTF and PBR models, or really any model tbh, draw into multiple buffers that get merged together at the end. the gltf model has multiple submeshes: the head, collar, collar clasp, eyes, shirt and teeth are all separate. then it has metallic-roughness, normal and occlusion maps. it also has a background material, a skybox material and an env texture for the specular. all of the environment, maps and lighting stuff dynamically contribute to the final look, so there’s lots of stuff going on.

also, this is a good example of what’s on by default and when/how to turn stuff off. snapshot rendering would be good here. babylon overworks itself by default


Okay, that makes sense as far as the frame time differences. But let’s say I want to render a bunch of primitives to the canvas directly, say a field of grass where each blade is composed of 5 or 7 triangles, and I intend to manage the culling of the grass myself - what would be the best way to go about rendering the grass? Right now it seems like I want to use an arbitrary mesh with my own shader programs similar to what’s done in the boid example.
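For concreteness, something like this (plain JS, everything here is a hypothetical sketch, not Babylon API) is what I mean by managing the geometry myself: pack every blade into one flat position buffer so the whole field is a single mesh and a single draw call, the same way the boid example drives one arbitrary mesh.

```javascript
// Hypothetical sketch: build one vertex buffer holding every grass blade,
// so the whole field can be drawn with a single draw call by a custom shader.
// Each blade is a tapered shape of 7 triangles, emitted as independent
// triangles (7 * 3 = 21 vertices, 63 floats per blade).

function buildGrassBlade(baseX, baseZ, height, width) {
  const verts = [];
  const segments = 3; // 3 quads (6 triangles) plus 1 tip triangle = 7 triangles
  for (let s = 0; s < segments; s++) {
    const t0 = s / (segments + 1);
    const t1 = (s + 1) / (segments + 1);
    const w0 = width * (1 - t0); // blade narrows toward the tip
    const w1 = width * (1 - t1);
    const y0 = height * t0;
    const y1 = height * t1;
    // two triangles per segment (one quad)
    verts.push(
      baseX - w0, y0, baseZ,  baseX + w0, y0, baseZ,  baseX - w1, y1, baseZ,
      baseX + w0, y0, baseZ,  baseX + w1, y1, baseZ,  baseX - w1, y1, baseZ,
    );
  }
  // tip triangle closing the blade
  const tTop = segments / (segments + 1);
  const wTop = width * (1 - tTop);
  const yTop = height * tTop;
  verts.push(
    baseX - wTop, yTop, baseZ,  baseX + wTop, yTop, baseZ,  baseX, height, baseZ,
  );
  return verts; // 63 numbers
}

function buildGrassField(bladeCount) {
  const positions = new Float32Array(bladeCount * 63);
  for (let i = 0; i < bladeCount; i++) {
    const x = Math.random() * 100;
    const z = Math.random() * 100;
    positions.set(buildGrassBlade(x, z, 1.0, 0.05), i * 63);
  }
  return positions; // upload once as the position attribute of one mesh
}
```

The resulting `Float32Array` would be handed to the engine as the single mesh’s position data, with sway/culling logic living in custom shaders.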

idk actually, let’s brainstorm. what are the options first? consider tradeoffs after.

displacement of the normals on the mesh https://playground.babylonjs.com/#J9PV7T#1

sps morphing - shows how you can change colors and geometry. sps btw is 1 mesh, hence “solid” particle system
https://playground.babylonjs.com/#1X7SUN#12

dynamic terrain - also integrates with sps. lots of good demos, i don’t want to link them all here
Search Page | Babylon.js Documentation
https://playground.babylonjs.com/#FJNR5#267

I’m thinking a simple solution could be a combination of the above examples, the morphing worm sps + the sps on the dynamic terrain, and see how that goes. sps is quite versatile and can even integrate the NME (node material editor).

sps tree generator
https://playground.babylonjs.com/#1LXNS9#4

instanced VAT (vertex animation textures) - i feel like no, but idk
https://playground.babylonjs.com/#CP2RN9#20

of course, we should consider how the master has done it.
GitHub - Popov72/OceanDemo: Ocean demo in WebGPU with Babylon.js

the new node material capabilities can be useful for this
https://playground.babylonjs.com/#M3QR7E#34

i saw in the 6.0 thread, this demo has a wind effect, which is pretty cool. i think integrating trees / everything into one “VegetationMaterial” or something could be cool. there are actually other benefits of doing this too, because with the webgpu engine we have snapshot rendering, which is gonna be super useful for this. however, it doesn’t work with dynamic shadows per se. Even Unreal has this limitation, but I saw a YouTube video explaining an Unreal plugin that overcomes this for tree shadows: it basically creates a second layer with simplified geometry of the trees to render the shadows from the invisible tree.
https://bibleadventure.com/
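a wind effect like that usually boils down to a per-blade sway evaluated in the vertex shader. here’s a minimal CPU reference of the idea in plain JS (all names and constants are made up, just to show the shape of the math a hypothetical “VegetationMaterial” would do per vertex):

```javascript
// Hypothetical CPU reference for a vertex-shader wind sway.
// Each blade gets a random phase so the field doesn't move in lockstep;
// displacement scales with height above the root so the base stays planted.

function windSway(timeSec, phase, heightFrac, windStrength = 0.15, windFreq = 1.5) {
  // heightFrac: 0 at the root of the blade, 1 at the tip
  const gust = Math.sin(timeSec * windFreq + phase);
  // quadratic falloff toward the root keeps the blade anchored to the ground
  return gust * windStrength * heightFrac * heightFrac;
}
```

the same expression ported to a shader (node material or WGSL) gives each blade its own motion from just a time uniform and a per-blade phase attribute.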

btw the dynamic lod of the dynamic terrain is quite nice. i think it’s essential, even, whether by using dynamic terrain or re-implementing it yourself for whatever reason.


Very cool, thanks for all the info!

i saw in the 6.0 thread, this demo has a wind effect, which is pretty cool. i think integrating trees / everything into one “VegetationMaterial” or something could be cool. there are actually other benefits of doing this too, because with the webgpu engine we have snapshot rendering, which is gonna be super useful for this.

I think a compute shader could handle animating and computing shadows for the moving vegetation, but I’ll have to read more on this!
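As a mental model, I’m picturing the compute pass as one thread per blade writing into a storage buffer that the vertex shader then reads. A CPU stand-in sketch (the buffer layout and names are made up):

```javascript
// Hypothetical CPU stand-in for a compute pass over a storage buffer.
// Layout: 2 floats per blade — [0] = fixed random phase, [1] = current sway,
// which the vertex shader would read to displace the blade's tip.

function dispatchSwayUpdate(buffer, timeSec, strength = 0.15, freq = 1.5) {
  const blades = buffer.length / 2;
  for (let i = 0; i < blades; i++) { // one "thread" per blade
    const phase = buffer[i * 2];
    buffer[i * 2 + 1] = Math.sin(timeSec * freq + phase) * strength;
  }
}

// The real version would be a WGSL compute shader dispatched once per frame,
// with `buffer` living on the GPU so nothing round-trips through the CPU.
```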

Even Unreal has this limitation, but I saw a YouTube video explaining an Unreal plugin that overcomes this for tree shadows: it basically creates a second layer with simplified geometry of the trees to render the shadows from the invisible tree.

Yeah that makes sense, definitely going to read more about this plugin. I think using simplified geometry is the way to go for environmental shadows as well as handling physics.