After some great help from @Evgeni_Popov and @CodingCrusader in my hair simulation journey, I made something a bit too general for that thread, but which might be useful to anyone else working with WebGPU who wants to compute (or re-use) the final mesh vertex positions on the GPU after bones and morph targets are applied. The default vertex shader already does this, but unfortunately there's no way to re-use those transforms, because vertex shaders can't really emit data.
My solution is a simple JS class (`BoundingInfoAndTransformsHelper`) that extends Babylon's `BoundingInfoHelper` to return not only the bounding box but also a WebGPU storage buffer containing all of the transformed vertex positions. Because the points are processed in parallel, this can be more efficient than CPU-side transforms for very large meshes. More details about the upstream class here: New feature: BoundingInfoHelper - #4 by Deltakosh. Performance is the same as just getting the bounds, because the hard work of computing the positions is already done for the bounds calculation.
But a playground is worth a thousand words
I'll also add the ability to compute normals soon, since I need that for my own projects.