Hello! I need some expert advice; I'm new to the 3D world.
We are implementing an online BIM model viewer; models consist of tens of thousands of mesh elements.
Requirements:
Select one element
Select several elements (with a marquee frame or via Shift)
Select all elements
Change the color of the element or elements (to any color, with transparency)
Partially draw an element (for example, half a wall)
Hide element
In the early stages, I rendered the entire model as a single merged mesh, which gave high performance, but this causes problems with transparency and makes features very complex to implement, because every operation on an individual element has to happen at the geometry-buffer level.
Because of that complexity, we decided to try drawing each element as its own mesh and instancing identical meshes. This solved the transparency problem but greatly reduced performance.
In this regard, I have a question.
Are there standard approaches for handling a large number of interactive elements in a scene?
At the moment, combining the opaque geometry into one mesh and rendering the transparent geometry separately seems to me to be the optimal solution.
To interact with elements, I will create new meshes for them and hide their geometry in the merged mesh.
For example, to color 1000 elements red, I will create a mesh with the geometry of these 1000 elements and give it a color through a material.
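To make that concrete, here is a minimal sketch of the idea in Babylon.js (TypeScript). It assumes the per-element meshes are kept around (disabled) purely as geometry sources; the helper names buildMergedView and highlightElements are hypothetical, and a real viewer would also have to deal with per-element materials (e.g. via the multiMultiMaterials option of MergeMeshes).

```ts
import { Mesh, Scene, StandardMaterial, Color3 } from "@babylonjs/core";

// Merge every element that is NOT in the excluded set into one static, opaque mesh.
// disposeSource = false keeps the originals available for later re-merges;
// allow32BitsIndices = true because BIM models easily exceed 65k vertices.
function buildMergedView(sourceMeshes: Mesh[], excluded: Set<string>): Mesh | null {
  const toMerge = sourceMeshes.filter((m) => !excluded.has(m.name));
  const merged = Mesh.MergeMeshes(toMerge, false, true);
  if (merged) merged.name = "mergedOpaque";
  return merged;
}

// Build a separate mesh for the selected elements and give it an override material.
function highlightElements(scene: Scene, sourceMeshes: Mesh[], ids: Set<string>): Mesh | null {
  const selected = sourceMeshes.filter((m) => ids.has(m.name));
  const highlight = Mesh.MergeMeshes(selected, false, true);
  if (highlight) {
    const mat = new StandardMaterial("highlightMat", scene);
    mat.diffuseColor = new Color3(1, 0, 0); // red
    mat.alpha = 0.5;                        // with transparency
    highlight.material = mat;
  }
  return highlight;
}

// Usage: to color 1000 elements red, re-merge the opaque view without them,
// then draw them once more as a single red, semi-transparent mesh:
//   const ids = new Set(selectedElementIds);
//   const opaque = buildMergedView(allElementMeshes, ids);
//   const overlay = highlightElements(scene, allElementMeshes, ids);
```

The trade-off is that every change to the selection set triggers a re-merge, so the cost is paid per interaction rather than per frame.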
In short, you’re going to have to re-model each BIM model to make the most of Babylon.js features and to have it render efficiently. Inefficient formats are why most online BIM viewers rely on server-based rendering. The biggest challenge is maintaining model quality across design teams.
Unfortunately the construction industry and standards are well behind!
If you have full control over your model-creation workflows and own them, then I can help with consultation/software workflows to create that pathway to glTF BIM. Apologies if I’m coming across as a bit negative, but that’s the reality.
As an example scenario: if all the models are created in Revit, you can limit object-based (parametric) modelling to family types only and create some guidance/rule sets to keep your output meshes “instance-able”. It really depends on the project. Some projects may not work on the web at all because of the nature of the detail (like MEP and furniture). More complex projects could work with a custom LOD system, but it’s going to be a lot of work.
In the case of BIM, a model can contain thousands of small meshes with different transforms but only a few materials, which fits Multi-Draw Indirect well. Its spec is not yet finished for WebGPU, but it already exists on native platforms such as OpenGL and Vulkan, and in some third-party benchmarks Multi-Draw Indirect performs about as well as a single merged mesh.
But since it is not fully available on the web yet, WEBGL_multi_draw is also worth considering: there is a feature request for it in Babylon.js, and three.js has already implemented it with BatchedMesh.
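As a rough illustration of the BatchedMesh route (this is three.js, not Babylon.js, and method names such as addInstance/setColorAt/setVisibleAt have shifted across recent three.js releases, so treat it as a sketch rather than a drop-in snippet):

```ts
import * as THREE from "three";

const scene = new THREE.Scene();
const material = new THREE.MeshStandardMaterial();

// One BatchedMesh holds many elements that share a material; they render in a
// single (multi-)draw call but stay individually addressable by instance id.
const batched = new THREE.BatchedMesh(10_000, 1_000_000, 3_000_000, material);

// Register each unique geometry once, then add one instance per occurrence.
const wallGeometry = new THREE.BoxGeometry(4, 3, 0.2); // stand-in for a real BIM element
const wallGeomId = batched.addGeometry(wallGeometry);

const wallId = batched.addInstance(wallGeomId);
batched.setMatrixAt(wallId, new THREE.Matrix4().makeTranslation(10, 0, 0)); // per-element transform

// Per-element interaction without rebuilding the shared geometry buffers:
batched.setColorAt(wallId, new THREE.Color(0xff0000)); // recolor one element
batched.setVisibleAt(wallId, false);                   // hide one element

scene.add(batched);
```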
With the introduction of compute shaders in WebGPU, there is also the option of using compute-shader rasterization instead of the hardware rasterizer, which gives developers more fine-grained control for performance tuning, and it has shown even better performance than Multi-Draw Indirect.