How can I show a large forest of trees at almost 60 fps?

See my experiments to solve it:

  • cloning is, as expected, slow - SceneOptimizer makes it as fast as merged meshes
  • createInstance is almost 10 times faster. And LOD (LevelOfDetail) works fine too.
  • merging meshes is great! - but no LOD :frowning:
  • SPS is slightly faster compared to merging, and there is also no LOD
    See also numbers below in the code

My current solution is to group parts of the forest into merged meshes and use LOD on each whole group.
I could do the same with SPS. Is using some 100 SPSs in parallel OK?
Well, since an SPS can show more visible trees, my groups could be larger.

I would like to use just one SPS, but I need LOD.
Is there a way to implement LOD-like behaviour inside an SPS?
As far as I know, it is possible to have one piece of GPU code
run for all meshes. So I need code that compares each particle's distance to the camera
and toggles its visibility.
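The per-particle check described above could look roughly like this. This is a plain-JavaScript sketch (no Babylon.js dependency); in a real SPS this logic would typically go into the `updateParticle` callback, and the particle objects here are stand-ins with made-up fields:

```javascript
// Sketch: hide any particle farther from the camera than maxDist.
// Particles and camera are plain {x, y, z} objects for illustration.
function updateVisibility(particles, camera, maxDist) {
  const max2 = maxDist * maxDist; // compare squared distances, no sqrt needed
  for (const p of particles) {
    const dx = p.x - camera.x, dy = p.y - camera.y, dz = p.z - camera.z;
    p.isVisible = dx * dx + dy * dy + dz * dz <= max2;
  }
}
```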

I just read about instance buffers.
In my experience, addLODLevel gets lost with SPS.addShape.
Could it be added via registerInstancedBuffer?

There’s no LOD per se in the SPS.
That said, you can have 2 (or more) sets of different particles: some with a high level of detail, others with a lower level, and recycle them according to their distance to the camera:
example 1: set the more detailed ones invisible beyond a given distance and make the less detailed ones visible at the same locations.
example 2: use 2 SPSs, one for the nearer, more detailed trees and another for the more distant, less detailed ones; just set each particle visible/invisible depending on the camera position.
example 3: use an expandable SPS (new feature) to remove or add particles instead of recycling them, if the performance still fits your needs.
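Example 1 could be sketched like this, assuming two particle sets placed at the same tree positions. Plain objects stand in for SPS particles; the field names are illustrative, not Babylon.js API:

```javascript
// Sketch: per tree position, show the high-detail particle when the
// camera is near and the low-detail one when it is far. highSet[i] and
// lowSet[i] are assumed to sit at the same (x, z) location.
function swapLodSets(highSet, lowSet, camera, lodDist) {
  const lod2 = lodDist * lodDist;
  for (let i = 0; i < highSet.length; i++) {
    const dx = highSet[i].x - camera.x, dz = highSet[i].z - camera.z;
    const near = dx * dx + dz * dz <= lod2;
    highSet[i].isVisible = near;   // detailed tree only when close
    lowSet[i].isVisible = !near;   // billboard/simple tree otherwise
  }
}
```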

The very same approach with instances and different levels of detail should be just as efficient.

Recycling should be quite easy in your case because the trees don’t move and their locations are known in advance. Setting objects visible/invisible should be efficient, fast and light to implement imho.

I have seen particle birds moved by the GPU only. So I still hope that similar code could hide a particle (or move it out of the frustum) according to its distance.

Do you mean making the decision on the CPU? Could that work for really many particles?
Sure, the positions are known. There are usually about 4 tiles of roughly 3 by 3 kilometres visible, each with maybe 40,000 positions!

I assume the GPU and the CPU run in parallel and a new cycle starts when both are done, right? How could I see which one is the limit, the one needing more time?

You’re right: what I proposed was CPU-side only.
And 40K positions is really a high number… but a 40K static position set is the best candidate for an octree (or a quadtree, if considering only the positions on the ground) or a simple ground/space partitioning system.
So there’s no need to test every particle each frame, only the quads near the camera position, and then change their status: visible/invisible. If the number still seems too big, you could also consider not doing this test each frame, but only on some particles and only on camera movement.
Whatever the approach, I think that if you have dozens or hundreds of similar objects to manage, not all visible on screen at the same time, implementing something to recycle them could really be worth it.
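A minimal sketch of such a ground partitioning, assuming a uniform grid rather than a full quadtree (the static positions make the simple version attractive). All names here are illustrative:

```javascript
// Bucket the static tree positions into a coarse grid once, at load time.
function buildGrid(positions, cellSize) {
  const grid = new Map();
  for (const p of positions) {
    const key = Math.floor(p.x / cellSize) + "," + Math.floor(p.z / cellSize);
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(p);
  }
  return grid;
}

// Per visibility pass, touch only the cells within range of the camera
// instead of iterating all 40K trees.
function treesNearCamera(grid, cellSize, camera, radius) {
  const result = [];
  const r = Math.ceil(radius / cellSize);
  const cx = Math.floor(camera.x / cellSize);
  const cz = Math.floor(camera.z / cellSize);
  for (let gx = cx - r; gx <= cx + r; gx++) {
    for (let gz = cz - r; gz <= cz + r; gz++) {
      const cell = grid.get(gx + "," + gz);
      if (cell) result.push(...cell);
    }
  }
  return result;
}
```

The grid is built once; each frame (or each camera move) only a handful of cells are scanned, which is what makes the visible/invisible toggling cheap.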


We are thinking in the same direction. No camera movement, no check needed. Rotation: only a certain angular sector needs checking. Moving closer/away: an arc around the camera has to be checked. Sliding sideways may be a combination of both.
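The "no camera move, no check needed" idea could be gated with a simple change test before running any visibility pass. This is a hypothetical sketch with made-up field names (`yaw` standing in for the camera's rotation):

```javascript
// Sketch: skip the visibility pass entirely unless the camera has moved
// more than posEps units or turned more than angEps radians.
function cameraChanged(prev, cur, posEps, angEps) {
  const dx = cur.x - prev.x, dy = cur.y - prev.y, dz = cur.z - prev.z;
  const moved = Math.sqrt(dx * dx + dy * dy + dz * dz) > posEps;
  const turned = Math.abs(cur.yaw - prev.yaw) > angEps;
  return moved || turned;
}
```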

Or, checking just the trees at the LOD limit would be fast but needs tricky code. Do you think a quadtree could do it: find all trees near the line dividing near and far LOD? Any hints for the code?
As I read about this on Wikipedia, I was pointed to frustum culling. To me, a newbie in GL, this is something a GPU is made for.
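The "trees at the LOD limit" idea amounts to an annulus (ring) query: only trees whose distance to the camera falls inside a band around the LOD boundary can change state, so only they need re-checking. A naive self-contained sketch (a quadtree or grid would narrow the candidate list first; the names here are illustrative):

```javascript
// Sketch: return the trees inside a ring of width 2 * band centred on
// the LOD boundary distance lodDist, measured on the ground plane.
function treesNearLodBoundary(trees, camera, lodDist, band) {
  const lo = (lodDist - band) * (lodDist - band);
  const hi = (lodDist + band) * (lodDist + band);
  return trees.filter(t => {
    const dx = t.x - camera.x, dz = t.z - camera.z;
    const d2 = dx * dx + dz * dz;
    return d2 >= lo && d2 <= hi; // inside the ring
  });
}
```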

Anyway, I would be happy if BabylonJS offered this feature one day.