For a very WIP underwater first-person fishing game with a currently very hairless character, I wanted non-rigid hair.
But of course it will also be useful for various soft elements like floating seaweed, fish fins, and kelp. So I wanted performant rope physics that I can slap onto Babylon meshes, with some control over the physics settings.
So, down the rabbit hole with ChatGPT to learn some simulation math, then implement it in WGSL.
I am working in whatever midnight minutes I can spare (new dad), so it is not as polished as I would like, but I would love to get feedback on any part, especially since some parts of the physics aren't quite natural yet. I will add some questions for the matrix geniuses here when I have the energy.
The development is looking great! Nice improvements on the hair dynamics. The plant life will be a perfect addition to your game. Maybe you can implement bubble plumes coming from the ocean floor and instance baby bottles as the particles!
It’s time for the best part! I am working to add hair to an animated character now. I’m using a Blender + Mixamo + glTF export workflow, as I’ve seen in examples.
Unfortunately the transform nodes + bone matrices are really doing my head in (it is a very weak head in the mathematical sense haha).
In too much detail (writing it out helps me understand what I’m trying to solve), here is my mental model for how the simulation interacts with the Babylon scene:
World space:
The hair mesh has its parent set to the character mesh (or transform node), so every world transform applied to the character applies to the hair too. But importantly, we record the inverse delta of the world-space transform each iteration (this preserves physical locality; see the explanation further down).
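A minimal CPU-side sketch of that bookkeeping (the class and names here are just for illustration; in the real sim this would feed the compute pass):

```ts
import { Matrix, Scene, TransformNode } from "@babylonjs/core";

// Illustrative bookkeeping: remember the character root's world matrix and expose the
// matrix that undoes the motion it made since the previous frame (the "inverse delta").
class WorldDeltaTracker {
    private readonly _prevWorld: Matrix;
    /** Maps a local-space point back to where the previous world matrix had placed it. */
    readonly inverseDelta = Matrix.Identity();

    constructor(root: TransformNode, scene: Scene) {
        this._prevWorld = root.computeWorldMatrix(true).clone();
        // Shown per frame for simplicity; in the sim this would run once per iteration.
        scene.onBeforeRenderObservable.add(() => {
            const current = root.computeWorldMatrix(true);
            // A local point p keeps its world position across the parent's motion if it is
            // replaced by p * prevWorld * currentWorld^-1 (Babylon's row-vector convention).
            this._prevWorld.multiplyToRef(Matrix.Invert(current), this.inverseDelta);
            this._prevWorld.copyFrom(current);
        });
    }
}
```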
Local space:
Each hair strand’s root point is pinned strictly to some vertex in the character mesh (call it the spawn vertex) and doesn’t follow physics. The other points down the strand, by contrast, should move in world space only because of simulated physical constraints.
Root point behavior: statically track the spawn vertex
That means at the beginning of each sim iteration we set the hair root point's local position to the local-space position of its spawn vertex, so local-space transforms (e.g. bones, morph targets) affect the spawn vertex and the root point equivalently.
Other points’ behavior: follow physical constraints
For non-root points in the strand, only simulated constraints should determine motion in world space. But there’s a gotcha: because we’re using local-space coordinates for the points’ positions and relying on parenting for the global transforms, dragging the character around would just move the hair with it statically. That doesn’t respect the physical notion of locality, where each section of hair wants to stay where it is until some external force moves it (e.g. tension upstream on the strand). So we need a trick here: for all non-root points, offset their positions against the mesh’s global transform (more specifically, offset by the inverse of the delta transform between the previous frame / iteration and the current one). This makes them appear suspended in space, resisting being moved by the parent mesh, until the one-way constraints to the statically positioned root points pull them towards the mesh’s new global position. The offsetting is done without affecting acceleration; no energy should be added because no movement occurs in global space.
To try to summarize this part: I want to keep the simulation positions in local space (which preserves numeric fidelity), but for global motion to affect the simulation accurately we need to offset all sim-controlled (i.e. non-root) hair points against any mesh transforms, so they react to the transform organically rather than statically.
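Put together, a CPU-style sketch of the per-iteration update described above could look like this (the real version lives in the WGSL sim; the buffer layout and names here are made up for illustration):

```ts
import { Matrix, Vector3 } from "@babylonjs/core";

// positions / prevPositions: xyz-interleaved local-space point buffers (Verlet style).
// isRoot: one flag per point; spawnLocalPositions: current local-space spawn vertex positions.
// inverseDelta: the inverse of the parent's world-transform delta since the last iteration.
function beginIteration(
    positions: Float32Array,
    prevPositions: Float32Array,
    isRoot: Uint8Array,
    spawnLocalPositions: Float32Array,
    inverseDelta: Matrix
): void {
    const p = new Vector3();
    for (let i = 0; i < isRoot.length; i++) {
        const o = i * 3;
        if (isRoot[i]) {
            // Root points statically track their spawn vertex in local space
            // (prev = current, so the pinned point carries no velocity of its own).
            positions[o] = spawnLocalPositions[o];
            positions[o + 1] = spawnLocalPositions[o + 1];
            positions[o + 2] = spawnLocalPositions[o + 2];
            prevPositions[o] = positions[o];
            prevPositions[o + 1] = positions[o + 1];
            prevPositions[o + 2] = positions[o + 2];
        } else {
            // Non-root points: cancel the parent's motion since the last iteration so they
            // stay put in world space. Applying the same offset to prevPositions keeps the
            // implied velocity unchanged, so no energy is added.
            for (const buf of [positions, prevPositions]) {
                p.set(buf[o], buf[o + 1], buf[o + 2]);
                Vector3.TransformCoordinatesToRef(p, inverseDelta, p);
                buf[o] = p.x;
                buf[o + 1] = p.y;
                buf[o + 2] = p.z;
            }
        }
    }
}
```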
What’s a viable approach to get the final post-transformation (i.e. after bones, morph targets) vertex positions / normals into my compute shader? I had the idea to just write the computed positions to a storage buffer from a material plugin's custom vertex stage (playground of that humiliation); of course it’s not possible to write to a storage buffer from a vertex shader (if one could only know everything beforehand)!
My next effort would probably be to port the transformations section of the (compiled) default vertex shader WGSL to a compute shader. But perhaps there’s a better solution?
If it would simplify the problem, for my own “character hair” use case I could decide by definition that only one bone has full weight over the relevant spawn vertices, and that could be the sole basis of local space transforms (no morph targets etc). Maybe in that case I could go with an approach like using { headBone }.getAbsoluteMatrix() as a uniform. But I wasn’t able to pull that off in this test playground yet, likely because I don’t understand some interactions between the mesh parenting and skinning.
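If that single-bone shortcut is enough, a hedged sketch of it: pull the bone's final skinning matrix (inverse bind pose composed with its absolute transform, which is what the skeleton's transform-matrices array already contains per bone) rather than the absolute matrix alone. The bone name below is an assumption (Mixamo-style naming):

```ts
import { AbstractMesh, Matrix, Skeleton } from "@babylonjs/core";

// Final skinning matrix of a single bone: bind-pose local space -> current local space.
// Assumes the Mixamo-style name "mixamorig:Head"; adjust for your rig.
function getHeadSkinMatrix(mesh: AbstractMesh, skeleton: Skeleton): Matrix {
    skeleton.prepare(); // make sure the bone matrices are up to date
    const boneIndex = skeleton.getBoneIndexByName("mixamorig:Head");
    const matrices = skeleton.getTransformMatrices(mesh); // 16 floats per bone
    return Matrix.FromArray(matrices, boneIndex * 16);
}
```

A spawn vertex fully weighted to that bone then maps from its bind-pose local position to its current local position via `Vector3.TransformCoordinates(bindPos, headSkinMatrix)`. If only the absolute matrix were used, the inverse bind pose would be missing and the positions would come out offset, which might be what the test playground runs into.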
Nice, thanks! A great, simple POC, and it should solve my use case perfectly. I think CPU-side is just fine for now actually; I will give your approach a try tonight, hopefully.
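For reference, a rough sketch of what CPU-side skinning can look like in Babylon (this is not the POC above; it assumes up to 4 bone influences per vertex and ignores morph targets and extra influences):

```ts
import { AbstractMesh, Matrix, Vector3, VertexBuffer } from "@babylonjs/core";

// Rough CPU linear-blend skinning: bind-pose local positions -> current local positions.
function computeSkinnedPositions(mesh: AbstractMesh, out?: Float32Array): Float32Array {
    const positions = mesh.getVerticesData(VertexBuffer.PositionKind)!;
    const boneIndices = mesh.getVerticesData(VertexBuffer.MatricesIndicesKind)!;
    const boneWeights = mesh.getVerticesData(VertexBuffer.MatricesWeightsKind)!;
    const skeleton = mesh.skeleton!;
    skeleton.prepare();
    const boneMatrices = skeleton.getTransformMatrices(mesh); // 16 floats per bone

    const result = out ?? new Float32Array(positions.length);
    const bindPos = new Vector3();
    const skinned = new Vector3();
    const tmp = new Vector3();
    const m = new Matrix();

    for (let v = 0; v < positions.length / 3; v++) {
        bindPos.set(positions[v * 3], positions[v * 3 + 1], positions[v * 3 + 2]);
        skinned.copyFromFloats(0, 0, 0);
        for (let k = 0; k < 4; k++) {
            const w = boneWeights[v * 4 + k];
            if (w === 0) {
                continue;
            }
            Matrix.FromArrayToRef(boneMatrices, boneIndices[v * 4 + k] * 16, m);
            Vector3.TransformCoordinatesToRef(bindPos, m, tmp);
            // Weights sum to 1, so blending transformed points equals blending matrices.
            skinned.addInPlace(tmp.scaleInPlace(w));
        }
        result[v * 3] = skinned.x;
        result[v * 3 + 1] = skinned.y;
        result[v * 3 + 2] = skinned.z;
    }
    return result;
}
```

The output stays in the mesh's local space, so it slots straight into the root-pinning step described earlier.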
I am still curious what a GPU-side solution would look like; especially for high-resolution meshes and with different types of local transforms, I could imagine the performance difference getting more dramatic.
But it is good to be pragmatic as a solo dev, and I am trying to embrace that.
WIP: based on @CodingCrusader’s POC (thanks again!) I added the ability to select spawn vertices from a control mesh (for something that looks more like a hair pattern; previously I was just using a Y-clip value). Next I will integrate the positions into my simulation compute shader via storage buffers.
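The hand-off I have in mind looks roughly like this (sketch only: the binding name and `simShader` are assumptions standing in for my existing WGSL sim pass):

```ts
import { StorageBuffer, WebGPUEngine } from "@babylonjs/core";

// Per-frame upload of the skinned spawn-vertex positions into a storage buffer that the
// sim compute shader reads. "spawnPositions" is an assumed binding name that would have to
// be declared in the WGSL and in the ComputeShader's bindingsMapping.
function createSpawnPositionFeed(engine: WebGPUEngine, pointCount: number) {
    const buffer = new StorageBuffer(engine, pointCount * 3 * 4); // xyz * float32
    return {
        buffer,
        upload(localSpawnPositions: Float32Array) {
            buffer.update(localSpawnPositions);
        },
    };
}

// Usage (assumed names):
// const feed = createSpawnPositionFeed(engine, spawnCount);
// simShader.setStorageBuffer("spawnPositions", feed.buffer);
// scene.onBeforeRenderObservable.add(() => feed.upload(currentSpawnLocalPositions));
```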
I can notice a little bit of wobble in the computed positions relative to the source mesh, in both the original POC and my version. Is this just numerical stability issues, or could the matrix data be one frame behind due to the timing of the updates?
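One way to test the one-frame-behind theory would be to sample the matrices only after the frame's animations have been evaluated, something like this (sketch; `refresh` is a placeholder for whatever recomputes and uploads the positions):

```ts
import { AbstractMesh, Scene } from "@babylonjs/core";

// Re-sample the bone matrices after animations run for the frame, so the skinned
// positions cannot lag one frame behind the rendered skeleton.
function sampleAfterAnimations(scene: Scene, character: AbstractMesh, refresh: () => void) {
    scene.onAfterAnimationsObservable.add(() => {
        character.skeleton?.prepare();
        refresh();
    });
}
```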
How should I scale / rotate an imported complex mesh with skeletons / animations such that this playground (using particles to depict the transformed positions) would still work? (I’ve read a few docs on this but did not completely understand it yet).
My own hair sim generates a single mesh, so I think I can just parent that mesh to the character's root transform node and it should follow any transforms. I'll try that next.
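A related sketch for the scaling question: since the computed positions are in the source mesh's local space, pushing them through the mesh's world matrix before placing the particles should account for any scaling / rotation on the imported hierarchy (e.g. a glTF `__root__` node), and the generated hair mesh can simply be parented to the character's root (names assumed):

```ts
import { AbstractMesh, Mesh, TransformNode, Vector3 } from "@babylonjs/core";

// Local-space skinned positions -> world space, so particle placement still works
// when the imported character hierarchy is scaled or rotated.
function toWorldPositions(source: AbstractMesh, localPositions: Float32Array): Vector3[] {
    const world = source.computeWorldMatrix(true);
    const out: Vector3[] = [];
    for (let i = 0; i < localPositions.length; i += 3) {
        out.push(Vector3.TransformCoordinates(
            new Vector3(localPositions[i], localPositions[i + 1], localPositions[i + 2]),
            world
        ));
    }
    return out;
}

// Parent the generated hair mesh to the character's root so it inherits its transforms.
function attachHair(hairMesh: Mesh, characterRoot: TransformNode): void {
    hairMesh.parent = characterRoot;
}
```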
You can take inspiration from what we did for the ComputeBoundingHelper class, which had to do exactly that to calculate the right bounding box for a list of vertices: