I have thought about this before. Read the Doc just now though, so let me just jump in totally unprepared, as usual. I wonder whether, unless the logic to decide which version to use is done in the vertex shader, adding much to the UI CPU thread would kind of defeat the purpose. Amdahl's law clearly shows that adding even a little work to a single controlling thread can have an outsized reduction in throughput.
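To make the Amdahl's law point concrete, here is a small sketch of the formula. The 95% / 90% parallel fractions and the 8-worker count are illustrative numbers I picked, not measurements of any engine:

```javascript
// Amdahl's law: overall speedup when a fraction p of the work is
// parallelizable across n workers and (1 - p) stays serial.
// S(p, n) = 1 / ((1 - p) + p / n)
function amdahlSpeedup(p, n) {
  return 1 / ((1 - p) + p / n);
}

// If extra LOD-selection work on the single UI/CPU thread shrinks the
// parallel fraction from 95% to 90%, the ceiling drops noticeably:
const before = amdahlSpeedup(0.95, 8); // ≈ 5.93x
const after  = amdahlSpeedup(0.90, 8); // ≈ 4.71x
```

So a 5-point loss in the parallel fraction costs roughly 20% of the achievable speedup here, which is why keeping that per-frame decision off the controlling thread matters.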
I see the purpose of LOD as purely frame-rate related, not anything else like "looking good, or good enough for conditions". The reason is: if the prior render time was within 0.0167 seconds (60 fps), or 0.011 seconds (90 fps), why reduce at all, no matter how far away the mesh was? See the ignorance showing yet?
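A minimal sketch of what I mean, using those frame budgets. This ignores distance entirely and only steps detail down when the last frame blew the budget (all names here are made up, not any existing API):

```javascript
// Frame budgets from the targets above.
const FRAME_BUDGET_60 = 1 / 60; // ≈ 0.0167 s
const FRAME_BUDGET_90 = 1 / 90; // ≈ 0.0111 s

// Hypothetical frame-time-driven LOD policy. Level 0 is full detail,
// maxLevel is the coarsest version available.
function nextLodLevel(currentLevel, maxLevel, lastFrameTime, budget) {
  if (lastFrameTime > budget && currentLevel < maxLevel) {
    return currentLevel + 1; // over budget: drop one level of detail
  }
  if (lastFrameTime < 0.8 * budget && currentLevel > 0) {
    return currentLevel - 1; // comfortable headroom: restore detail
  }
  return currentLevel; // within budget: leave it alone
}
```

The 0.8 headroom factor is just a guess at hysteresis so the level does not flap between frames; the point is that the trigger is elapsed time, not distance.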
I am also wondering, if LOD is expressible in JSON / .babylon, whether there are any requirements, or things that cannot be used at the same time. Like:
- skeleton animation
Blender does have a development process where you can literally start off with a cube, and then add more and more levels of detail. I have never even had it come up here, and I think nobody uses it.
It also has a "Limited Dissolve" operation that can be run to go the opposite way. I might be able to have the exporter generate low-res version(s). It would probably destroy the scene in the process, but I could just delete everything, so someone would know not to re-save the .blend file after exporting.
I actually see the ability to easily get the versions in the first place as the bigger problem.