So now I’ve got the basics down for actually using the engine, and I’m working on implementing the visualization strategy for my simulation. This is a general question about handling procedural world data and map building, so let me know if it’s not engine-specific enough and/or steer me toward an appropriate place to find the answer.
I’ve already got code that builds a largish tilemap of interior space (rooms, walls, doors, hallways, etc.); I’m currently testing around 200x200 and would like to support up to ~500x500. I’m also slowly building up low-poly 3D assets for furnishings and decorations. Ideally I’d like to use LOD substitutions to allow continuous zoom from the full map down to individual rooms (omitting details in favor of a general floorplan model when zoomed out), although this has been a struggle in every visualization implementation I’ve tried (2D, 3D, whatever). My question has two parts:
First: I think I can build room meshes dynamically, either by using ground/plane pieces plus simple assets (doors, windows, etc.) and MergeMeshes, or by constructing the vertex arrays from scratch. What I’m unclear on is how to optimize multi-material and UV mapping to skin the final room interior mesh: do all vertices needing a particular submaterial need to be grouped together, or can I add those material mappings on a face-by-face basis? These individual room meshes would then be added to the scene octree and should (hopefully) allow the full map to be rendered at a distance by drawing a few dozen room meshes instead of the many thousands of individual ground/plane instances I’m toying with now. TiledGround seems out of the question, since any large subdivision count (above roughly 32) freezes the system during generation, and while an intermediate array of, say, 10x10 TiledGrounds might be workable, I’d really rather have one mesh per room for things like selection outlining. Is my plan a viable way to handle the data, or are there smarter tactics others have used?
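For concreteness, here’s roughly what I’m picturing for the material side, assuming I’ve understood MultiMaterial/SubMesh correctly: each SubMesh covers a contiguous index range, so I’d sort faces so that same-material triangles sit together in the index buffer. This is an untested sketch; the vertex arrays, materials, and index counts are placeholders for my real data:

```ts
// Sketch: one merged room mesh whose index buffer is sliced into
// per-material SubMeshes. Assumes same-material faces have been
// grouped into contiguous index ranges beforehand.
const roomMesh = new BABYLON.Mesh("room-42", scene);

const vertexData = new BABYLON.VertexData();
vertexData.positions = positions; // built from the tilemap (placeholder)
vertexData.indices = indices;     // sorted so same-material faces are contiguous
vertexData.normals = normals;
vertexData.uvs = uvs;
vertexData.applyToMesh(roomMesh);

const multiMat = new BABYLON.MultiMaterial("room-42-mats", scene);
multiMat.subMaterials.push(floorMat, wallMat, trimMat); // placeholders
roomMesh.material = multiMat;

// One SubMesh per contiguous index range; the constructor registers
// itself on the mesh: (materialIndex, verticesStart, verticesCount,
// indexStart, indexCount, mesh)
roomMesh.subMeshes = [];
new BABYLON.SubMesh(0, 0, vertexCount, 0, floorIndexCount, roomMesh);
new BABYLON.SubMesh(1, 0, vertexCount, floorIndexCount, wallIndexCount, roomMesh);
new BABYLON.SubMesh(2, 0, vertexCount, floorIndexCount + wallIndexCount, trimIndexCount, roomMesh);
```

If that contiguous-range assumption is right, then face-by-face assignment really just means sorting faces by material at build time, which I can live with.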
Second: when using LOD, I’m guessing that objects with a null level (i.e., not rendered) at a certain distance still need to be checked every frame, which might become untenable with thousands of objects (furniture, decorations) in a distance view. I know renderingGroupIds are available; is there a switch to turn these groups on and off wholesale, so that an ‘object’ layer can be disabled at a certain zoom distance to avoid even walking that list? How would this affect a scene octree? Can multiple octrees be used and likewise toggled in a single stroke?
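The kind of thing I’m hoping exists, sketched with the one mechanism I have found so far (layerMask is a real mesh/camera property; the layer constant, threshold, and mesh list are my own inventions):

```ts
// Sketch: cull an entire "furnishings" layer by flipping one bit on the
// camera's layerMask, rather than toggling thousands of meshes one by one.
// FURNISHINGS_LAYER and FAR_ZOOM_THRESHOLD are my own constants.
const FURNISHINGS_LAYER = 0x10000000;
camera.layerMask |= FURNISHINGS_LAYER; // camera starts with the layer visible
furnitureMeshes.forEach(m => (m.layerMask = FURNISHINGS_LAYER));

scene.onBeforeRenderObservable.add(() => {
    if (camera.radius > FAR_ZOOM_THRESHOLD) {   // ArcRotateCamera zoom distance
        camera.layerMask &= ~FURNISHINGS_LAYER; // whole layer culled
    } else {
        camera.layerMask |= FURNISHINGS_LAYER;
    }
});
```

Though I don’t know whether active-mesh selection still iterates the full list just to test the mask, which is really the heart of my question.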
These are some initial questions coming up as I try to plan out my visualization code; I’m sure I’m missing a million others I should be asking, and I’ll certainly have more, so feel free to point me in another direction or share any indirectly related advice on the subject. I’m an old hand at programming in general and have dabbled in graphics programming many times, but this is my first real foray into serious 3D engine work, so any and all advice is appreciated.
Dynamic Terrain lets you depict a map (ground relief) and an object map (which objects are on this relief, where, and how), and to render all of this with only 2 draw calls by drawing only the parts around the camera’s current position.
Example: Test Babylon SP Terrain
The map is 1000x1000 points (pre-computed from a Perlin noise function).
The object map holds almost 70,000 objects.
The terrain itself is only a 100x100-vertex mesh, updated from the camera position and the current map data.
The visible objects are only a pool of 6,000 recycled solid particles.
Thanks @jerome! I briefly considered the Dynamic Terrain option; it’s definitely a neat tool, but not a fit for what I’m doing currently, since one of my primary concerns is seeing the whole map at once, and the map data is made of architectural building blocks rather than heightmapped terrain.
My current approach is manually building meshes for each room based on specs in the generated tilemap. I initially tried making a ton of ground and plane objects and stitching them together with MergeMeshes, but the overhead was unbearable, due (surprisingly) to the dispose calls on the thousands of initially generated meshes during the merge. Building the geometry by hand has been a good tour of the mesh generation process, and it seems to generate quickly and perform well at scale. I’ll revisit the LOD issues when I get there, and in the future I’ll work on asking more bite-sized, specific questions; I understand why tackling monolithic, open-ended questions might be off-putting.
I guess my next question is how easy or hard it is to stitch together asset vertex data to create new meshes. Building floors and walls from simple quads is relatively easy, but doors and windows (anything with more complicated geometry or UV mapping) may be harder to do programmatically, so I’ll be looking at merging asset geometry into my generated mesh; a rough sketch of what I mean is below.
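Something along these lines, assuming VertexData.merge handles the index offsets the way I expect (doorAssetMesh, the door coordinates, and the room arrays are placeholders for my real data):

```ts
// Sketch: stitch a pre-built door asset's geometry into the room's
// VertexData. Assumes doorAssetMesh was loaded once and kept disabled
// as a template, and that both sets carry the same attributes
// (positions/indices/normals/uvs) so they can merge cleanly.
const roomData = new BABYLON.VertexData();
roomData.positions = roomPositions; // quads built from the tilemap (placeholders)
roomData.indices = roomIndices;
roomData.normals = roomNormals;
roomData.uvs = roomUVs;

// Pull the asset's geometry, transform it into room space, then merge.
const doorData = BABYLON.VertexData.ExtractFromMesh(doorAssetMesh);
doorData.transform(BABYLON.Matrix.Translation(doorX, 0, doorZ));
roomData.merge(doorData); // appends positions/indices/uvs, offsetting indices

roomData.applyToMesh(roomMesh);
```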
A simplified version of the open-ended question would be more along the lines of: what are some best practices, or other developers’ experiences, working with modular architectural assets for procedural scene generation? I’m particularly interested in performance with complex or large scenes built from building-block assets at runtime. Thanks!
I can’t speak to much of the 3D-specific technique, such as LOD or how exactly to build the meshes, but I’ve made a fair number of very large tile maps in 2D, and I think some of those techniques may help.
Tiles can be grouped into chunks, most easily square ones. For example, the first 10x10 (or much larger) area of tiles can be stored as a 2D, or preferably 1D, array and called a chunk. The chunks themselves can be grouped in a sparse collection like a Map or an object. The whole thing can then be wrapped in an interface that allows manipulation of individual tiles with an API like world.getTile(x, y, z); internally it converts these “world coordinates” to chunk coordinates plus local tile coordinates and returns the tile’s data. Here’s a rough sketch of that system before going on.
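A minimal sketch, 2D for brevity and untested; the chunk size and Uint16Array tile storage are arbitrary choices:

```ts
// Sparse chunked tile store: only chunks that exist are kept,
// keyed by their chunk coordinates.
const CHUNK_SIZE = 16;

class World {
    private chunks = new Map<string, Uint16Array>();

    getTile(x: number, z: number): number {
        // world coords -> chunk coords (Math.floor handles negatives)
        const cx = Math.floor(x / CHUNK_SIZE);
        const cz = Math.floor(z / CHUNK_SIZE);
        const chunk = this.chunks.get(`${cx},${cz}`);
        if (!chunk) return 0; // unloaded/empty chunk
        // local coords within the chunk, flattened into a 1D array
        const lx = x - cx * CHUNK_SIZE;
        const lz = z - cz * CHUNK_SIZE;
        return chunk[lz * CHUNK_SIZE + lx];
    }

    setTile(x: number, z: number, value: number): void {
        const cx = Math.floor(x / CHUNK_SIZE);
        const cz = Math.floor(z / CHUNK_SIZE);
        const key = `${cx},${cz}`;
        let chunk = this.chunks.get(key);
        if (!chunk) {
            // allocate the chunk lazily on first write
            chunk = new Uint16Array(CHUNK_SIZE * CHUNK_SIZE);
            this.chunks.set(key, chunk);
        }
        chunk[(z - cz * CHUNK_SIZE) * CHUNK_SIZE + (x - cx * CHUNK_SIZE)] = value;
    }
}
```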
Once there is a sparse collection of chunks, where each chunk is an array of tiles, it becomes reasonable to perform operations on a small subset of the whole. It becomes possible to have a practically infinite map but only load small sections of it.
Operations can also be performed on single chunks or single tiles. For example, if building a mesh out of a full chunk takes 6 ms, building three or more in one frame would blow the ~16.7 ms budget at 60 fps. But we can queue chunksToRebuild, chunksToLoad, chunksToReduceLOD, etc., and process one item per frame to avoid locking the main thread for longer than a frame; a sketch of that queue follows.
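Something like this, where the job bodies (rebuild, load, LOD swap) stand in for whatever chunk work applies; scene is the Babylon scene, everything else is made up for illustration:

```ts
type ChunkJob = () => void; // a deferred chunk operation (build/load/LOD swap)

const jobQueue: ChunkJob[] = [];

// enqueue work from anywhere, e.g.:
// jobQueue.push(() => rebuildChunkMesh(chunk));

// Drain at most one job per frame, so a 6 ms mesh build can never stack
// with others and push a frame past its budget.
scene.onBeforeRenderObservable.add(() => {
    const job = jobQueue.shift();
    if (job) job();
});
```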
As for how often to merge meshes within a chunk, or how big a chunk should be, I’m not sure. In the end, a low-end machine isn’t going to be able to zoom out and render thousands of meshes. But a system like the above does offer the granularity to show, merge, and dispose of small areas while spreading operations out over multiple frames. It also allows high-speed selection of chunks or tiles in several scenarios, because one never actually has to iterate through any objects to know which ones are near the camera/player: dividing the camera position by the chunk size answers which chunk it is in, even if that chunk isn’t loaded.
Thanks for the feedback! I’ve got a system running now that builds meshes on the fly, and it seems to be working pretty well. Since it’s interior space, building one mesh per room is a good division and lets me ‘theme’ rooms with different materials. Rooms work essentially like chunks, but with variable size (most are fairly reasonable, and I could probably subdivide large rooms if needed). Using this pattern I’m getting the full map to render pretty handily, even on mobile devices, although I’m sure I’ll run into some slowdown as things get more complex.