At the moment I have a lot of meshes and draw calls, and I will probably add more later. Because the browser render loop is effectively single-threaded, my performance bottleneck is the CPU, and I have confirmed this with Chrome's Performance panel.
Description of the scene: the main models cover a large world area (terrain, railway, viaduct) and have relatively high face counts; these models are static. There are also some dynamic models added based on business data, but they are relatively few. I used thin instances for objects in the scene wherever possible.
I've tested every scene optimization I could (except snapshot rendering mode), and the results are not particularly good.
Because the scene is viewed in an open "god view", all objects are rendered. Here are some new ideas I'm considering:
- Use progressive loading (unloadable, with a reduced memory footprint), and make loading/unloading invisible, since the user must never get stuck in the middle of an action. In my testing, .babylon incremental loading does not support unloading and does not work with thin-instanced objects.
- Use an OffscreenCanvas, because the project still has complex HTML logic and separating rendering from the HTML might be more efficient. I haven't tried this yet, but in testing I found there is latency when forwarding interaction events into the Web Worker.
- Load a copy of the current scene in a Web Worker and manually control which objects are shown or hidden based on the camera position. However, since the high-poly objects in the scene span a large part of the world, this method may not help much. At present I use this idea to hide a GUI image when it is occluded by a mesh.
- Since the scene is divided into regions, connected by railways and viaducts, could the large world objects be split per region during the modeling phase, and then shown and hidden dynamically based on the camera position? I observed that the drawCalls count does not increase while an object is hidden. This seems similar in spirit to progressive loading.
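The per-region show/hide idea in the last bullet can be sketched framework-agnostically. The region names, bounds, and distances below are made up for illustration; in Babylon.js you would toggle something like `setEnabled(false)` on each region's root node, since a disabled mesh contributes no draw calls.

```javascript
// Minimal sketch: decide which regions should be visible from the camera position.
// Region layout and the 10 km view distance are hypothetical values.
const regions = [
  { name: "stationA", center: { x: 0, z: 0 }, radius: 2000 },
  { name: "bridge12", center: { x: 12000, z: 500 }, radius: 3000 },
  { name: "stationB", center: { x: 84000, z: 0 }, radius: 2000 },
];

const VIEW_DISTANCE = 10000; // show regions within 10 km of the camera

// Returns the names of regions whose bounding circle is within viewDistance.
function visibleRegions(cameraPos, regionList, viewDistance) {
  return regionList
    .filter((r) => {
      const dx = r.center.x - cameraPos.x;
      const dz = r.center.z - cameraPos.z;
      // Measure to the region's bounding circle, not its center, so a large
      // region stays visible while the camera is inside it.
      return Math.hypot(dx, dz) - r.radius <= viewDistance;
    })
    .map((r) => r.name);
}

const visible = visibleRegions({ x: 0, z: 0 }, regions, VIEW_DISTANCE);
// With the camera at the origin, stationA and bridge12 are in range.
```

In a render loop you would diff this set against the previous frame's set and only toggle the regions that changed, to avoid per-frame churn.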
Now I want to solve the problem fundamentally, so I think rendering efficiency should be taken into account from the modeling stage onward: partition the scene sensibly according to the objects the project needs to display, so as to reduce the number of objects visible at the same time and therefore the number of draw calls. This looks a bit like the 3D Tiles specification. Do I need to convert the large static world objects into 3D Tiles and render them using loaders.gl?
Do you have any good suggestions?
I guess you already know this documentation page: Optimizing Your Scene | Babylon.js Documentation
Yes, I read the page carefully and applied the optimizations I could. The main problem now is that the number of models in my scene leads to too many drawCalls, which creates a CPU-side bottleneck and leaves my GPU resources underutilized. So I have to find a way to reduce the number of models shown at the same time. Do you have any experience or workflow for developing and rendering large-world scenes?
Since the interior of the scene is divided into multiple regions, connected by railways and viaducts, and the models are currently loaded, displayed, and instanced all at once, I wonder if it is possible to split these models by region in the modeling phase, load them incrementally, and hide distant regions to reduce drawCalls.
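A minimal sketch of that incremental load/unload idea, with placeholder `loadRegion`/`unloadRegion` callbacks (in Babylon.js these would wrap something like `SceneLoader.ImportMeshAsync` and mesh disposal; the region names are hypothetical):

```javascript
// Sketch of incremental region streaming: keep only the wanted regions resident.
class RegionStreamer {
  constructor(loadRegion, unloadRegion) {
    this.loaded = new Set();
    this.loadRegion = loadRegion;     // placeholder: fetch + add meshes
    this.unloadRegion = unloadRegion; // placeholder: dispose meshes, free memory
  }

  // Call whenever the camera moves, with the regions that should be resident.
  update(wantedNames) {
    const wanted = new Set(wantedNames);
    for (const name of wanted) {
      if (!this.loaded.has(name)) {
        this.loaded.add(name);
        this.loadRegion(name); // async in a real app; fire-and-forget here
      }
    }
    for (const name of [...this.loaded]) {
      if (!wanted.has(name)) {
        this.loaded.delete(name);
        this.unloadRegion(name);
      }
    }
  }
}

const log = [];
const streamer = new RegionStreamer(
  (n) => log.push(`load ${n}`),
  (n) => log.push(`unload ${n}`)
);
streamer.update(["A", "B"]); // loads A and B
streamer.update(["B", "C"]); // loads C, unloads A
```

Keeping the resident set in one place like this also makes it easy to throttle loads so they never stall a user interaction.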
Is it possible to optimize the geometry further, or do you really need that many polygons?
I considered this approach, but there is an instanced bridge in the scene that spans the entire world, so in this case LOD operations can only be performed per region.
And using LOD requires manual setup, because glTF doesn't seem to support an LOD extension yet.
The bridge is 84 km long; with 1 scene unit corresponding to 1 meter, the scene spans 84,000 units.
The geometry has already been decimated and Draco-compressed. I need all these meshes, but I don't need to show them all at the same time: the scene is world-scale, so distant meshes can stay hidden and be revealed gradually as the camera gets close.
First you may try to simplify your meshes with auto-LOD: Simplifying Meshes With Auto-LOD | Babylon.js Documentation
If the results suit you (in terms of performance), you may want to preprocess those meshes and load the pre-simplified versions at load time, to avoid the long simplification step at runtime.
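One way to structure the preprocessed variants is a simple distance-to-quality mapping. The quality/distance bands and file names below are hypothetical; in Babylon.js the real mechanism would be `mesh.addLODLevel(distance, simplifiedMesh)` on the loaded meshes, or the auto-LOD `simplify` settings if you simplify at runtime.

```javascript
// Sketch: choose which pre-simplified asset variant to use per distance band.
// Distances are in scene units (1 unit = 1 m in this project).
const lodLevels = [
  { maxDistance: 1000, asset: "bridge_full.glb" },     // full detail up close
  { maxDistance: 5000, asset: "bridge_50pct.glb" },    // ~50% decimated
  { maxDistance: Infinity, asset: "bridge_10pct.glb" } // coarse, far away
];

// Returns the asset for the first band the distance falls into.
function pickLod(distance, levels) {
  return levels.find((l) => distance <= l.maxDistance).asset;
}

const nearAsset = pickLod(300, lodLevels);  // full-detail variant
const farAsset = pickLod(20000, lodLevels); // coarse variant
```

The point of preprocessing is that the expensive decimation happens once in a build step, so the 1-minute initial load doesn't grow at runtime.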
If possible, can you show us a screenshot of the real scene?
Chunk it up?
This is only one segment; there are nearly 80 kilometers of bridges in the direction of the red arrow, with a region at intervals along the way. @Joe_Kerr
auto-LOD looks like a time-consuming operation. I could probably layer it on top of my large-world split loading, but if I used it as-is, I suspect the initial page load time (currently around 1 minute) would get much longer.
If auto-LOD works for you, you can later use preprocessed (pre-simplified) meshes for the LOD system instead of running simplification each time.
How do I preprocess the meshes, with the Blender Babylon export plugin? Or can you point me to a document?
Is Gaussian Splatting an option?
Thank you for introducing me to the concept of Gaussian Splatting, but I don't know how to integrate it with my scene.
You'll find some answers here
You have 7x more draw calls than meshes. Are you using one (or several) cascaded shadow map(s)? If so, you should limit the list of shadow casters to the meshes closest to the camera, to avoid having all 500 meshes processed (see Cascaded Shadow Maps | Babylon.js Documentation).
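The caster-limiting suggestion can be sketched as a simple distance filter. The mesh objects below are plain stubs; in Babylon.js you would assign the filtered array to the shadow generator's shadow map `renderList` (refreshed when the camera moves a meaningful distance, not every frame).

```javascript
// Sketch: restrict shadow casters to meshes near the camera, so the shadow
// maps don't re-render every mesh in the scene each frame.
function nearbyShadowCasters(meshes, cameraPos, maxDistance) {
  return meshes.filter((m) => {
    const dx = m.position.x - cameraPos.x;
    const dz = m.position.z - cameraPos.z;
    return Math.hypot(dx, dz) <= maxDistance;
  });
}

// Stub meshes standing in for scene meshes (hypothetical data).
const sceneMeshes = [
  { name: "near", position: { x: 100, z: 0 } },
  { name: "far", position: { x: 40000, z: 0 } },
];
const casters = nearbyShadowCasters(sceneMeshes, { x: 0, z: 0 }, 5000);
// Only "near" remains a shadow caster; "far" is skipped by the shadow passes.
```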
Also, showing the "Frame steps duration" statistics would help in understanding where the bottleneck is:
To optimize mesh geometry, one may use the glTF Transform CLI: Command-line quickstart | glTF Transform
Popov's reply may be the main optimization you can do to lower the draw calls. It is also possible to merge meshes and remove redundant materials; I don't think you should need more than about 50 different materials.
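One way to find the redundant materials worth merging is to group them by their visually significant properties. The key below is a hypothetical simplification (a real key would also cover textures, alpha mode, etc.), but the grouping approach is the same regardless of engine.

```javascript
// Sketch: group materials by a key of their visually significant properties,
// so duplicates can be merged into one shared material (fewer materials
// means fewer state changes and better batching).
function groupMaterials(materials) {
  const byKey = new Map();
  for (const mat of materials) {
    // Hypothetical key; extend with texture IDs, alpha, etc. in a real pass.
    const key = JSON.stringify([mat.diffuseColor, mat.roughness]);
    if (!byKey.has(key)) byKey.set(key, []);
    byKey.get(key).push(mat.name);
  }
  return byKey;
}

// Stub materials standing in for scene materials (hypothetical data).
const mats = [
  { name: "steel_a", diffuseColor: "#888", roughness: 0.4 },
  { name: "steel_b", diffuseColor: "#888", roughness: 0.4 }, // duplicate of steel_a
  { name: "concrete", diffuseColor: "#ccc", roughness: 0.9 },
];
const groups = groupMaterials(mats);
// groups has 2 entries: steel_a/steel_b share one key, concrete has its own.
```

Every group with more than one member is a merge candidate: keep one material and repoint the other meshes at it.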