Nice!
How are you determining the visible objects? I found with so many without an octree it was too slow.
Do you happen to have source available?
It’s looking very very similar to what I was trying to achieve!
In this example, I use the DynamicTerrain extension with its embedded SPS to manage objects from a map. So the recycling is “automatic” (done by the extension).
The visible objects are the ones that are on the map section rendered by the current terrain. It’s only 2D then, so easier to do.
Doc : Extensions/dynamicTerrainDocumentation.md at master · BabylonJS/Extensions · GitHub
The recycling code is here : Extensions/babylon.dynamicTerrain.ts at master · BabylonJS/Extensions · GitHub
In your case, the implementation would probably require parsing every object location in a first pass, then storing these locations in a three-dimensional array (or a flat array with fast access to each dimension) depicting the space in cubic chunks. Then you can quickly know which chunks are in the camera frustum and render only the particles from these chunks.
This is how I would do it.
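To illustrate the idea, here is a minimal sketch of that partitioning (all names here are hypothetical, not from the extension): object positions are bucketed into cubic chunks held in a flat array, with the flat index computed as `x + y * nx + z * nx * ny` for fast access to each dimension.

```typescript
type Vec3 = { x: number; y: number; z: number };

class ChunkGrid {
  private chunks: Vec3[][]; // per-chunk list of object positions

  constructor(
    private origin: Vec3,      // min corner of the partitioned space
    private chunkSize: number, // edge length of one cubic chunk
    private nx: number,
    private ny: number,
    private nz: number
  ) {
    this.chunks = new Array(nx * ny * nz);
    for (let i = 0; i < this.chunks.length; i++) this.chunks[i] = [];
  }

  // Flat index giving O(1) access to any (cx, cy, cz) chunk.
  private index(cx: number, cy: number, cz: number): number {
    return cx + cy * this.nx + cz * this.nx * this.ny;
  }

  // Bucket an object position into its chunk (positions are assumed
  // to fall inside the partitioned space; no bounds check here).
  add(p: Vec3): void {
    const cx = Math.floor((p.x - this.origin.x) / this.chunkSize);
    const cy = Math.floor((p.y - this.origin.y) / this.chunkSize);
    const cz = Math.floor((p.z - this.origin.z) / this.chunkSize);
    this.chunks[this.index(cx, cy, cz)].push(p);
  }

  objectsIn(cx: number, cy: number, cz: number): Vec3[] {
    return this.chunks[this.index(cx, cy, cz)];
  }
}
```

The first pass over all object locations is just a loop calling `add()`; afterwards, rendering a chunk is a single array lookup.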
This is how I did it with the mesh partitioning for the FacetData feature : Babylon.js Documentation
It would be the same approach, but for the whole space instead of a single mesh. It's fast and quite GC friendly, because only a little memory is allocated/freed when things move.
Babylon.js/babylon.abstractMesh.ts at master · BabylonJS/Babylon.js · GitHub
I had the idea to generalize the SPS 2D automatic recycling to 3D, but as it wasn't an identified need (neither for me nor for the community), I didn't implement it so far. I'm not even sure that anyone but me is using the DynamicTerrain so far.
Maybe I’ll do it (3D automatic recycling) in 2019, but my todo list is quite long yet.
another simpler idea : if you don’t want to do any frustum calculation, you could just consider a big cubic space around the camera and recycle all the particles present in this cube.
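As a sketch of that simpler idea (hypothetical names, not a real API): a particle is kept alive while it stays inside a big cube of half-size `radius` centered on the camera, and recycled as soon as it leaves.

```typescript
type Point = { x: number; y: number; z: number };

// True when the particle sits inside the axis-aligned cube of
// half-size `radius` centered on the camera position.
function isInsideCameraCube(particle: Point, camera: Point, radius: number): boolean {
  return (
    Math.abs(particle.x - camera.x) <= radius &&
    Math.abs(particle.y - camera.y) <= radius &&
    Math.abs(particle.z - camera.z) <= radius
  );
}
```

No frustum math is needed: three absolute-value comparisons per particle decide whether to recycle it.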
If your camera has some movement constraint (say : it can’t rotate, or move in some direction) then it could also be easier to choose a simpler recycling algorithm.
Thinking back to your issue, I believe the easiest (and most performant) thing to do would be to simply consider a logical cube moving with the camera in a space partitioned along the World X, Y, Z axes.
When the logical cube intersects one of the space's cubic parts holding some objects, render them with some recycled solid particles. It's simple to implement and fast, since it's just an AABB intersection to check.
When the camera rotates, nothing more needs to be done, because everything is already rendered in the cube centered on it.
This should be a good start and should answer your need.
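The AABB intersection check mentioned above is just six comparisons. A minimal version (hypothetical names) for testing the logical cube against a cubic chunk of the partitioned space:

```typescript
type AABB = {
  minX: number; minY: number; minZ: number;
  maxX: number; maxY: number; maxZ: number;
};

// Two axis-aligned boxes intersect when they overlap on all three axes.
function aabbIntersects(a: AABB, b: AABB): boolean {
  return (
    a.minX <= b.maxX && a.maxX >= b.minX &&
    a.minY <= b.maxY && a.maxY >= b.minY &&
    a.minZ <= b.maxZ && a.maxZ >= b.minZ
  );
}
```

Each frame, the logical cube's AABB is recomputed from the camera position and tested against the chunks; only the objects in intersecting chunks get assigned to recycled solid particles.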
To improve the perfs later, I'll implement a way to check whether a particle is in the camera frustum, so only the particles within the logical cube AND in the camera frustum would be rendered. In other words, fewer particles to render to simulate more objects in the world.
I’m adding right now this feature (solidParticle.isInFrustum(frustumPlanes)) in my 2019 todo list.
This could also enhance the DynamicTerrain.
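The planned `solidParticle.isInFrustum(frustumPlanes)` isn't implemented yet, but the usual technique behind such a check looks like this (a sketch with hypothetical types, not the actual Babylon.js API): a point is inside the frustum when it lies on the positive side of all six frustum planes.

```typescript
type Point = { x: number; y: number; z: number };

// A plane in normal-and-distance form: nx*x + ny*y + nz*z + d = 0.
type Plane = { nx: number; ny: number; nz: number; d: number };

// A point is in the frustum when the signed distance to every
// plane is non-negative (planes assumed to face inward).
function pointInFrustum(p: Point, planes: Plane[]): boolean {
  for (const pl of planes) {
    if (pl.nx * p.x + pl.ny * p.y + pl.nz * p.z + pl.d < 0) {
      return false;
    }
  }
  return true;
}
```

For particles with some extent, the same test is usually run against a bounding sphere by comparing the signed distance to minus the radius instead of zero.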
I really like these new forums. I was writing a reply when the computer crashed. I rebooted, came back, and it had saved my draft!
I haven’t added the SPS in yet, but I’ve started recycling the instances and it is smooth now. I think with the SPS (and a few other optimisations) I could really get the object count up a lot higher.
You can see the culling at the edges in the little video here. It’s using an octree for culling (in the map implementation), so I think I can get away without any culling on the SPS. I only pull in objects from the map that are in the camera frustum (and within a max distance).
I know it looks like voxels, but it is a bit different. I’m just trying to get started, and my art skills aren’t the greatest.