Ok, I have an idea that I wanted to discuss the plausibility of before I try to jump into it too hard.
So I recently saw Marble Marcher, a concept video game by CodeParade, where the graphics are completely raymarched and the collisions are still handled traditionally on the CPU, by running a parallel ray marching algorithm that is fed the same distance equation as the shader rendering the scene.
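To make that concrete, here's a minimal sketch of what a CPU-side raymarch against the same signed distance function (SDF) as the shader might look like. This is my own illustration, not CodeParade's actual code; the sphere SDF, step counts, and epsilons are just placeholder assumptions:

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere -- the same formula the shader would evaluate."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def ray_march(origin, direction, sdf, max_steps=64, hit_eps=1e-4, max_dist=100.0):
    """Classic sphere tracing: advance along the ray by the distance the SDF reports."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + direction[i] * t for i in range(3))
        d = sdf(p)
        if d < hit_eps:
            return t  # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

# A ray fired from z = -5 straight at a unit sphere at the origin hits at t = 4.
print(ray_march((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), sdf_sphere))  # -> 4.0
```

The point is just that the CPU and GPU agree because they share one equation, so a collision test can march the same field the renderer draws.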
Pretty dope idea, but it had me thinking.
What if I adapted this whole idea (Billboard RayMarch Plane - Questions & Answers - HTML5 Game Devs Forum) and made it so there was a physical box mesh in the scene (or any other basic shape) with a raymarching shader on it that makes it look like another object, let's say a sphere?
Now, while this box is using the shader to draw the sphere, there is no physical mesh to collide against other than the box itself, which doesn't make sense: I mean, duh, box collider on a sphere. But what if, while the scene is compiling and running, I asynchronously start subdividing the box in the background with a CPU algorithm that produces roughly the same output as the GPU ray marching, just dumbed down, throttled, and limited to a certain number of intervals before it's considered complete? That algorithm would subdivide the physical mesh and displace its vertices until it becomes a close approximation of the GPU raymarched object. I figured you would use the physical mesh as a projection cage, so each object would have its own coordinate space and its own compilation step, but that step could be requested at the same interval as a LOD transition.
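The vertex-displacement step I have in mind could be sketched something like this: each vertex of the subdivided cage gets walked along the SDF gradient until it lands on the zero level set (the raymarched surface). This is just my rough illustration with a sphere SDF and made-up iteration counts, not a real implementation:

```python
import math

def sdf_sphere(p, radius=1.0):
    """Signed distance to a unit sphere at the origin -- stand-in for the shader's equation."""
    return math.sqrt(sum(c * c for c in p)) - radius

def sdf_gradient(sdf, p, h=1e-4):
    """Finite-difference gradient of the SDF; points along the surface normal."""
    g = []
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += h
        lo[i] -= h
        g.append((sdf(hi) - sdf(lo)) / (2 * h))
    n = math.sqrt(sum(c * c for c in g)) or 1.0
    return [c / n for c in g]

def snap_to_surface(vertex, sdf, iterations=8):
    """Displace one cage vertex onto the SDF's zero level set (the visible surface)."""
    p = list(vertex)
    for _ in range(iterations):
        d = sdf(p)                      # how far off the surface we are
        g = sdf_gradient(sdf, p)        # which way the surface is
        p = [p[i] - d * g[i] for i in range(3)]
    return p

# A box corner at (1, 1, 1) snaps onto the unit sphere, ending up at distance 1 from the origin.
corner = snap_to_surface((1.0, 1.0, 1.0), sdf_sphere)
print(math.sqrt(sum(c * c for c in corner)))  # -> ~1.0
```

Run that over every vertex after each subdivision pass and the box collider gradually converges on the shape the shader is actually drawing, which is the "close approximation" I'm after.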
This is a really wishy-washy concept, but I'm hoping someone might get what I'm laying down. I don't know exactly what the benefit would be, but it does sound like an interesting piece of tech.