IIUC, NeRF promises real-time rendering of complex assets captured from just a few photos. This is great because it lets more folks create 3D models (as a volumetric representation rather than a mesh representation). For some use cases it would be OK to have no mesh in Babylon at all, and instead a data structure that replaces the mesh and displays the output of a SNeRG NeRF model.
I might be quite a ways off in my understanding of how this tech works, but here are the questions I have:
- Can Babylon.js support adding something like a SNeRG object to a scene? This would essentially be a baked NeRF model: the camera position is passed to the model, and the model outputs the light reflected by the object.
- Will these models be a lot smaller than supercompressed KTX2 textures, both for transport and for size on the GPU (i.e., memory utilization)?
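To make the first question more concrete, here is a rough sketch of what such a "SNeRG object" might evaluate per pixel. This is not Babylon.js or SNeRG code; all the names are made up, and the "view MLP" is a deterministic placeholder for the small trained network SNeRG actually uses. The general shape follows the SNeRG paper: a baked voxel stores density, a view-independent diffuse color, and a small feature vector; diffuse color and features are alpha-composited along the ray, and a tiny network turns the composited features plus view direction into a view-dependent residual.

```typescript
// Hypothetical sketch of SNeRG-style deferred shading (not a real API).

type Vec3 = [number, number, number];

interface BakedVoxel {
  density: number;      // volume density (sigma) from the baked grid
  diffuse: Vec3;        // view-independent RGB
  features: number[];   // small feature vector (e.g. 4 floats)
}

// Alpha-composite diffuse color and features along one ray's samples.
function compositeRay(samples: BakedVoxel[], stepSize: number) {
  let transmittance = 1;
  const diffuse: Vec3 = [0, 0, 0];
  const features = [0, 0, 0, 0];
  for (const s of samples) {
    const alpha = 1 - Math.exp(-s.density * stepSize);
    const w = transmittance * alpha;
    for (let i = 0; i < 3; i++) diffuse[i] += w * s.diffuse[i];
    for (let i = 0; i < features.length; i++) features[i] += w * (s.features[i] ?? 0);
    transmittance *= 1 - alpha;
    if (transmittance < 1e-3) break; // early ray termination
  }
  return { diffuse, features, transmittance };
}

// Placeholder for SNeRG's small view-dependence MLP: maps composited
// features + view direction to a non-negative specular residual.
function viewDependentResidual(features: number[], viewDir: Vec3): Vec3 {
  const dot = features.reduce((acc, f, i) => acc + f * viewDir[i % 3], 0);
  const r = Math.max(0, Math.tanh(dot)) * 0.1;
  return [r, r, r];
}

// One pixel: composite along the ray, then add the view-dependent term.
function shadePixel(samples: BakedVoxel[], viewDir: Vec3, stepSize = 0.01): Vec3 {
  const { diffuse, features } = compositeRay(samples, stepSize);
  const spec = viewDependentResidual(features, viewDir);
  return [diffuse[0] + spec[0], diffuse[1] + spec[1], diffuse[2] + spec[2]];
}
```

In a real integration, `compositeRay` would run in a fragment shader (e.g. via a custom ShaderMaterial) sampling baked 3D textures, with the camera position and direction fed in as uniforms; the point of the sketch is just that the per-pixel work is a grid lookup plus a tiny fixed-cost network, not a full NeRF evaluation.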
Why do this?
- Simpler model creation: photogrammetry can be a pain, and 3D objects are non-trivial to create.
- Perhaps smaller assets (don’t know for sure yet)
See the link for more details.