According to my understanding, compared to regular instanced meshes, Thin Instances store all instance data in several Float32Array buffers, which reduces overhead on the JavaScript side and avoids the cost of looping over every mesh object. Is this correct?
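To make sure we are talking about the same setup, this is roughly how I create them (the `sphere` mesh and the instance count are just placeholders for illustration):

```ts
// My mental model: one flat Float32Array holding a 4x4 world matrix per instance,
// handed to the mesh in a single call instead of creating per-instance objects.
const instanceCount = 1000; // arbitrary illustrative value
const matrixData = new Float32Array(16 * instanceCount);

const tmp = BABYLON.Matrix.Identity();
for (let i = 0; i < instanceCount; i++) {
    BABYLON.Matrix.TranslationToRef(i * 2, 0, 0, tmp);
    tmp.copyToArray(matrixData, i * 16); // write matrix i at float offset i * 16
}

// "matrix" is the built-in kind for per-instance world matrices; the stride is 16 floats
sphere.thinInstanceSetBuffer("matrix", matrixData, 16);
```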
However, the official documentation mentions two drawbacks of Thin Instances:
- The “all or nothing” rendering mechanism: either all instances are shown or all are hidden.
- High cost for adding/removing: adding or removing a thin instance is much more expensive than with InstancedMesh.
The “all or nothing” rendering mechanism raises a concern for me: are these thin instances frustum culled? If so, how? From the source code of _evaluateActiveMeshes, it seems that if thin instances skip that function, there should be no per-instance frustum culling. There is also a forum post saying “You should trade more work on the GPU side (because some meshes may be sent to the GPU that would have been culled earlier) than on the CPU in that case”, but I also found an answer from two years later saying “By default, thin instances are frustum culled”.
Does this mean that the “all or nothing” mechanism refers to all thin instances being submitted to the GPU (logically all visible), and then frustum culling is performed on the GPU side?
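For context, this is how I currently deal with culling on my side; I am not sure whether this is what the “all or nothing” wording refers to (the explicit bounding-info refresh and the opt-out flag are just my assumptions about what matters here):

```ts
// My understanding: the host mesh is frustum culled as a single unit, so its
// bounding info has to enclose every thin instance, otherwise instances near
// the frustum edges may pop in/out together with the host mesh.
sphere.thinInstanceRefreshBoundingInfo(); // recompute bounding info from the matrix buffer

// Alternative I have seen suggested: skip CPU-side culling entirely and always
// submit the mesh (and all its thin instances) to the GPU.
// sphere.alwaysSelectAsActiveMesh = true;
```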
The other drawback is the high cost of adding/removing thin instances. I understand the cost of removal: since the buffer is contiguous, removing a thin instance in the middle requires moving all subsequent data, so we may need to rebuild the buffers each time. But for adding, can’t we just append to the end of the Float32Array instead of rebuilding? We could even pre-allocate a buffer larger than needed, as the documentation suggests (something like the sketch below).
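Concretely, what I had in mind is this kind of sketch (the capacity value is arbitrary, `sphere` is again a placeholder mesh, and I may be misreading how thinInstanceSetBuffer / thinInstanceBufferUpdated / thinInstanceCount are meant to be used together):

```ts
// Pre-allocate room for `capacity` instances, but only draw `liveCount` of them.
const capacity = 4096; // assumed upper bound, picked arbitrarily
const matrices = new Float32Array(16 * capacity);
let liveCount = 0;

// The last argument (false) marks the buffer as dynamic so it can be updated in place.
sphere.thinInstanceSetBuffer("matrix", matrices, 16, false);
sphere.thinInstanceCount = liveCount;

function appendInstance(worldMatrix: BABYLON.Matrix): void {
    if (liveCount >= capacity) {
        return; // only here would the buffer need to be grown/rebuilt
    }
    worldMatrix.copyToArray(matrices, liveCount * 16); // append at the end, no data moved
    liveCount++;
    sphere.thinInstanceCount = liveCount;       // expose one more instance
    sphere.thinInstanceBufferUpdated("matrix"); // notify the engine the buffer changed
}
```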
Or does the “high cost” refer not to the JavaScript side, but rather to the internal operations of thinInstanceSetBuffer (maybe recreating and copying the buffer?) and the cost of uploading it to the GPU?
If anyone can clarify these points (an in-depth explanation from the engine’s perspective would be even better!), it would be greatly appreciated!