Levels of Detail (LOD) support for scaled meshes

I have a project in which meshes are dynamically scaled to small sizes. This seems like a perfect use case for applying different LODs to save on detail when objects are very small, but then I realized that Babylon.js LOD does not cover such a scenario.

Please have a look at:
https://playground.babylonjs.com/#QEMSV1

The lower mesh is at its original size but far away. In that case LOD kicks in and a different mesh is rendered (pink).

The upper mesh is close to the camera but scaled by a factor of 0.1. Despite being visually smaller, Babylon.js renders the mesh at the original level of detail (brown mesh).
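
A minimal sketch approximating that setup (mesh names, geometry and the 20-unit threshold are illustrative, not the exact playground code; it assumes a `scene` variable as in the playground's createScene):

```ts
// Lower mesh: original size, pushed far away. Distance-based LOD kicks in and the
// low-detail (pink) version is rendered.
const farMesh = BABYLON.MeshBuilder.CreateSphere("far", { segments: 32 }, scene);
const farLod = BABYLON.MeshBuilder.CreateSphere("farLod", { segments: 4 }, scene);
farMesh.addLODLevel(20, farLod); // use farLod when farther than 20 units
farMesh.position.z = 50;

// Upper mesh: close to the camera but scaled by 0.1. It looks just as small on screen,
// yet the full-detail (brown) version is rendered, because only distance is considered.
const smallMesh = BABYLON.MeshBuilder.CreateSphere("small", { segments: 32 }, scene);
const smallLod = BABYLON.MeshBuilder.CreateSphere("smallLod", { segments: 4 }, scene);
smallMesh.addLODLevel(20, smallLod);
smallMesh.scaling.scaleInPlace(0.1);
smallMesh.position.z = 5;
```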

I would expect this to behave differently, and hence my feature request.

On a side note, IMHO addLODLevel() would be more intuitive if it used a scale parameter instead of a distance. Since that is probably not possible for backward-compatibility reasons, how about adding addLODLevelAtScale()?

This way, in addition to what we have now:
mesh.addLODLevel(10, mesh1) - use mesh1 if the mesh is farther than 10 units from the camera

we would be able to say:
mesh.addLODLevelAtScale(0.5, mesh1) - use mesh1 if the mesh appears smaller than half of the screen size
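
Something along these lines can be faked today by folding the mesh's (uniform) scale into the distance threshold, since a mesh scaled by 0.1 reaches a given apparent size at one tenth of the distance. This is only a rough workaround sketch; the helper name below is hypothetical, not existing Babylon.js API:

```ts
// Hypothetical helper, not a Babylon.js API: approximate "switch when the mesh looks small"
// with the existing distance-based API by scaling the threshold.
// Only valid for a uniform, static scaling that is set before this call.
function addLODLevelCompensatedForScale(mesh: BABYLON.Mesh, distance: number, lodMesh: BABYLON.Mesh): BABYLON.Mesh {
    const uniformScale = mesh.scaling.x; // assumes scaling.x === scaling.y === scaling.z
    // A mesh scaled by s appears as small at (distance * s) as an unscaled one does at (distance).
    return mesh.addLODLevel(distance * uniformScale, lodMesh);
}
```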

This is mostly because we currently only support LOD based on distance to the camera.

I would love to add support for LOD based on coverage:
Add screen coverage LOD support · Issue #5738 · BabylonJS/Babylon.js: https://github.com/BabylonJS/Babylon.js/issues/5738

If this is something you are interested in helping with, please tell me.


@Deltakosh thank you for the invitation, but it is a bit over my head. I am a backend engineer who just started playing with computer graphics, and my programming skills in this domain are currently inversely proportional to the number of feature requests I can come up with.

To give an example, I just spent an hour trying to find out how to implement WebGL queries according to your advice:

to be performant we need to use WebGL queries that will give us the screen coverage of a mesh. Then we can convert that number of pixels to a threshold, like a ratio to the overall number of pixels

and failed.
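
(For anyone reading along: a much cruder, CPU-side approximation of screen coverage is possible without GPU queries, by projecting the bounding sphere into screen space. This is just a sketch of that idea, not the query-based approach quoted above, and it assumes a perspective camera with the default vertical FOV mode.)

```ts
// Rough CPU-side screen-coverage estimate: project the bounding sphere's radius into
// pixels and compare the resulting disc area with the total number of pixels.
function estimateScreenCoverage(mesh: BABYLON.AbstractMesh, camera: BABYLON.Camera, engine: BABYLON.Engine): number {
    const sphere = mesh.getBoundingInfo().boundingSphere;
    const distance = BABYLON.Vector3.Distance(sphere.centerWorld, camera.globalPosition);
    if (distance <= sphere.radiusWorld) {
        return 1; // camera is inside the bounding sphere: treat as full coverage
    }
    const screenHeight = engine.getRenderHeight();
    // Projected radius in pixels (camera.fov is the vertical field of view in radians).
    const projectedRadiusPx = (sphere.radiusWorld / (distance * Math.tan(camera.fov / 2))) * (screenHeight / 2);
    const coveredPixels = Math.PI * projectedRadiusPx * projectedRadiusPx;
    const totalPixels = engine.getRenderWidth() * screenHeight;
    return Math.min(coveredPixels / totalPixels, 1);
}
```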

Nevertheless, it is nice to see that I arrived at the same conclusion as people before me. This confirms that my line of thought is right and that a more advanced LOD mechanism could significantly improve the performance of Babylon.js.

As a user, I would add that whoever dares to reimplement Babylon.js' LOD mechanism should make it pluggable, so that in the future not only screen coverage but also other metrics could trigger LOD (a rough sketch follows the list). I can think of:

  • distance from the screen center (the closer to screen edges, the less detail)
  • exposure to light (the darker, the less detail)
  • frame rate (the lower fps, the less detail)
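
To make that concrete, here is a sketch of what such a pluggable interface could look like. It is purely hypothetical, nothing like this exists in Babylon.js; the coverage strategy reuses the estimateScreenCoverage sketch from my previous post:

```ts
// Hypothetical pluggable LOD policy. Each strategy returns a detail factor in [0, 1]
// (1 = full detail); a composite selector could combine the factors (e.g. take the minimum)
// and map the result onto the available LOD meshes.
interface LodStrategy {
    detailFactor(mesh: BABYLON.AbstractMesh, camera: BABYLON.Camera, scene: BABYLON.Scene): number;
}

class ScreenCoverageStrategy implements LodStrategy {
    detailFactor(mesh: BABYLON.AbstractMesh, camera: BABYLON.Camera, scene: BABYLON.Scene): number {
        return estimateScreenCoverage(mesh, camera, scene.getEngine());
    }
}

class FrameRateStrategy implements LodStrategy {
    constructor(private targetFps = 60) {}
    detailFactor(_mesh: BABYLON.AbstractMesh, _camera: BABYLON.Camera, scene: BABYLON.Scene): number {
        return Math.min(scene.getEngine().getFps() / this.targetFps, 1);
    }
}
```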

I hope it doesn’t sound naive.

It does not!
Feel free to reply to the issue on GitHub with your thoughts so we can be sure they will be considered when we work on it.


I have thought about this before. I only read the doc just now, so let me jump in totally unprepared, as usual. I wonder whether, unless the logic that decides which version to use runs in the vertex shader, adding much to the UI/CPU thread would defeat the purpose. Amdahl's law shows that adding even a little work to a single controlling thread can cause an outsized reduction in throughput.

I see the purpose of LOD as purely frame-rate related, not other things like "looking good, or good enough for conditions". The reason is: if the prior render time was within 0.0167 seconds (60 fps), or 0.011 seconds (90 fps), why reduce detail at all, no matter how far away the mesh was? See the ignorance showing yet :grin:?
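
As a rough illustration of that idea (assuming the usual scene/engine variables), any LOD downgrading could be gated on whether the previous frame actually blew its budget; the applyLowerDetail function below is just a placeholder, not a Babylon.js API:

```ts
// Only drop detail when the previous frame missed the frame-time budget.
const targetFrameTimeMs = 1000 / 60; // ~16.7 ms per frame at 60 fps

function applyLowerDetail(enabled: boolean): void {
    // Placeholder: in a real system this would switch between high- and low-detail meshes.
}

scene.onBeforeRenderObservable.add(() => {
    const lastFrameMs = engine.getDeltaTime(); // duration of the previous frame, in ms
    applyLowerDetail(lastFrameMs > targetFrameTimeMs);
});
```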


I am also wondering: if LOD is expressible in JSON / .babylon, are there any requirements, or things that cannot be used at the same time? Like:

  • Morphing
  • Skeleton animation

Blender does have a development process where you can literally start off with a cube and then add more and more levels of detail. I have never even had it come up here, and I think nobody uses it.

It also has a "Limited Dissolve" operation that can be used to go the opposite way. I might be able to have the exporter generate low-res version(s). It would probably destroy the scene in the process, but I could just delete everything, so someone would simply need to not re-save the .blend file after exporting.

I actually see the ability to easily get the different versions in the first place as the bigger problem.