Asking for advice on how to build an efficient zone for my game

Hi!
As some of you might know, for some time I’ve been working on a 3D MMORPG with 2D movement (like e.g. Diablo). I don’t have much experience with 3D graphics and I’d like to ask more experienced and smarter people than me for advice.

My current idea is to split the zone into 32x32 segments. Initially the map will be 16x16 segments (= 256 segments in total), with a player visibility range of 2 sectors. I built a prototype in Blender and it works nicely. As a root for each segment I used an empty mesh, which after conversion to glb turns into a transform node. Each sector consists of 3 planes:

  • Ground - invisible, used for picking the world position needed for movement, spells, etc.
  • Collision grid - I was thinking of using planes with 1x1 cells to hold the collision data. Eventually it will be removed from the loaded map, or digested and removed in an earlier step.
  • Map - a plane with the other meshes as children.
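To make the segment bookkeeping concrete, here is a sketch of mapping a world position to its segment and root-node name. The constants and the `segment_x_y` naming scheme are illustrative assumptions, not taken from the prototype:

```typescript
// Maps a world position to its segment's grid coordinates and a node name.
// Assumes the zone starts at the world origin; SEGMENT_SIZE and the naming
// scheme are illustrative, not from the original post.
const SEGMENT_SIZE = 32;   // segment edge length in meters
const ZONE_SEGMENTS = 16;  // zone is 16x16 segments

function segmentOf(x: number, z: number): { sx: number; sz: number } {
  const sx = Math.floor(x / SEGMENT_SIZE);
  const sz = Math.floor(z / SEGMENT_SIZE);
  // Clamp to the zone so positions on the far edge still map to a segment.
  return {
    sx: Math.min(Math.max(sx, 0), ZONE_SEGMENTS - 1),
    sz: Math.min(Math.max(sz, 0), ZONE_SEGMENTS - 1),
  };
}

// Name of the transform node acting as the segment root, e.g. "segment_3_7".
function segmentNodeName(x: number, z: number): string {
  const { sx, sz } = segmentOf(x, z);
  return `segment_${sx}_${sz}`;
}
```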

Why do I have both a map and a ground plane? From my understanding, simply putting models on the map will cause performance overhead from increased draw calls. I wanted to keep ‘ground’ intact and merge the ‘map’ plane with all the other meshes that are placed on top of it. But I’m not sure if such a merge is feasible. I think I cannot just merge meshes with different materials; a possible way is to merge at the UV level? I don’t understand it yet.
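A rough sketch of what a merge does at the geometry level, in plain TypeScript rather than any engine API (buffer layout simplified to positions + indices): vertices are concatenated and the second mesh's indices are re-offset. Since a merged mesh binds one material per draw call, differing materials are the real obstacle, which is why people reach for texture atlases or multi-materials:

```typescript
// The core of a mesh merge, sketched on raw geometry buffers: concatenate
// vertex positions and offset the second mesh's indices by the number of
// vertices already present. Illustrative only; real merges also handle
// normals, UVs, and world transforms.
interface Geometry {
  positions: number[]; // flat xyz triples
  indices: number[];
}

function mergeGeometries(a: Geometry, b: Geometry): Geometry {
  const vertexOffset = a.positions.length / 3; // vertices already in `a`
  return {
    positions: [...a.positions, ...b.positions],
    indices: [...a.indices, ...b.indices.map((i) => i + vertexOffset)],
  };
}
```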

So the questions are: is this a valid approach? What are my alternatives? How would you approach this problem?

Screenshot of the inspector from the current prototype:

Let me add @Cedric who might be able to help with this

Sorry, but I’m not sure I understand. What do you want to achieve? What is moving? How does the user interact with things?

Hey @Maiu I am a bit uncertain how you differentiate “zone”, “segment” and “sector”. Is there like one big map or game world (you made in Blender) and this turned out to be too heavy on fps? And now you want to chunk this map up in order to increase fps?

I also do not quite understand the relations between “ground”, “collision grid” and “map”. These are planes? Should they help increase fps?

Anyway, what strikes me already, what is your camera perspective? Exactly like in the screenshot (particularly this distance)? If, as an alternative, you look more top-down’ish (like Diablo), you can decrease the render distance, to increase fps. Would that be an option?

About merging meshes with different materials. Do meshes have painted textures? Then one way is using texture atlases and UVs (as you suggested). It gets easier if you have palette textures. Then you can just merge palettes as well.
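One way the atlas + UV approach works, sketched in TypeScript (the tile layout and grid size are illustrative assumptions): each source texture becomes one tile of an N x N atlas, and every UV of a mesh using that texture is squeezed into its tile:

```typescript
// Remapping a mesh's UVs into one tile of a texture atlas: if the atlas is
// an N x N grid of equally sized source textures, a UV (u, v) in [0, 1]
// belonging to tile (tx, ty) lands at ((tx + u) / N, (ty + v) / N).
function remapToAtlas(
  uvs: number[],        // flat [u0, v0, u1, v1, ...] in [0, 1]
  tx: number,           // tile column in the atlas
  ty: number,           // tile row in the atlas
  tilesPerSide: number  // N for an N x N atlas
): number[] {
  const out: number[] = [];
  for (let i = 0; i < uvs.length; i += 2) {
    out.push((tx + uvs[i]) / tilesPerSide);
    out.push((ty + uvs[i + 1]) / tilesPerSide);
  }
  return out;
}
```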

However, when you say merge. You mean like the meshes that enter and leave the camera? This might be heavy for the render loop. Probably need to prototype and measure it :frowning:

Hi @Joe_Kerr

Hey @Maiu I am a bit uncertain how you differentiate “zone”, “segment” and “sector”. Is there like one big map or game world (you made in Blender) and this turned out to be too heavy on fps? And now you want to chunk this map up in order to increase fps?

Imagine a 512x512 unit (meter) square. That will be the ‘zone’. Then I plan to split it into segments/sectors of size 32x32 m. 512 / 32 = 16, so there will be 16x16 segments/sectors in this zone.

I also do not quite understand the relations between “ground”, “collision grid” and “map”. These are planes? Should they help increase fps?

Yes, these are planes; all of them will have size 32x32.
Ground will have the minimum number of vertices (4). It will be used only for the picking ray, e.g. to point at the location where the player wants to go or cast a spell. This way I won’t need to pick through all the environment elements - I’ll be able to set them as isPickable = false.

The collision grid is just a metadata structure used to paint the collision map as a grid in Blender, which is then turned into some data structure. It should not be considered renderable; it will not exist while playing the game - only the collision data in some sort of data structure, e.g. a table.
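The “digested” collision data could look something like this - a sketch assuming one byte per 1x1 cell and local segment coordinates (the class name and the `1 = blocked` encoding are my assumptions, not from the post):

```typescript
// A flat Uint8Array with one byte per 1x1 cell, indexed from local world
// coordinates within a segment. The grid size follows the post (32x32);
// the encoding (1 = blocked, 0 = walkable) is an assumption.
const GRID_SIZE = 32; // 32x32 cells per segment, 1 m each

class CollisionGrid {
  private cells = new Uint8Array(GRID_SIZE * GRID_SIZE);

  setBlocked(cx: number, cz: number, blocked: boolean): void {
    this.cells[cz * GRID_SIZE + cx] = blocked ? 1 : 0;
  }

  // Local world position within the segment -> walkable?
  isWalkable(x: number, z: number): boolean {
    const cx = Math.floor(x);
    const cz = Math.floor(z);
    if (cx < 0 || cz < 0 || cx >= GRID_SIZE || cz >= GRID_SIZE) return false;
    return this.cells[cz * GRID_SIZE + cx] === 0;
  }
}
```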

Anyway, what strikes me already, what is your camera perspective? Exactly like in the screenshot (particularly this distance)? If, as an alternative, you look more top-down’ish (like Diablo), you can decrease the render distance, to increase fps. Would that be an option?

This screenshot is taken from the sandbox.babylon tool. I wanted to show the current structure in the inspector. In the game the camera radius is much lower.

About merging meshes with different materials. Do meshes have painted textures? Then one way is using texture atlases and UVs (as you suggested). It gets easier if you have palette textures. Then you can just merge palettes as well.

I think they will have painted textures; it’s hard to answer right now, I’m still learning.
Ideally I want to merge all the environmental elements in a segment into a single mesh. Then drawing one sector (static environment) will cost only 1 draw call, plus some processing of the ‘ground’, but that will be in another camera layer so there is no draw cost.

However, when you say merge. You mean like the meshes that enter and leave the camera? This might be heavy for the render loop. Probably need to prototype and measure it

I’d merge while preparing the zone map, e.g. before exporting from Blender.

This is a recording of my prototype. The player sees the segment at their current location plus all other sectors within radius 2. When moving and changing segments, some segments get disabled and others enabled:

I’d like to avoid having multiple draw calls from the map segments (only the static environment stuff) and this way increase performance.

The player moves within the world zone. The zone is segmented into 32x32 chunks. The player sees only the environment in their chunk and the chunks within a radius of 2. So in this example 5x5 segments = a 160x160 area in total.
All other monsters, players, and interactive elements are considered dynamic and won’t be merged. Their visibility is managed by the game engine. The server sends each player only data from the player’s neighbourhood.
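The visibility rule described here can be sketched as plain set math: the player's segment plus everything within a Chebyshev radius of 2 gives the 5x5 block of enabled segments, clipped at the zone border (string keys are an illustrative choice):

```typescript
// Computes the set of segments that should be enabled for a player standing
// in segment (px, pz), using a square (Chebyshev) radius as in the post.
function visibleSegments(
  px: number, pz: number, // player's segment coordinates
  radius: number,
  zoneSize: number        // zone is zoneSize x zoneSize segments
): Set<string> {
  const out = new Set<string>();
  for (let x = px - radius; x <= px + radius; x++) {
    for (let z = pz - radius; z <= pz + radius; z++) {
      // Skip coordinates outside the zone (players near the border
      // simply see fewer segments).
      if (x >= 0 && z >= 0 && x < zoneSize && z < zoneSize) {
        out.add(`${x},${z}`);
      }
    }
  }
  return out;
}
```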

You may have a look at how it is done here - GitHub - fenomas/noa: Experimental voxel game engine.


I’d use thin instances. There would be as many draw calls as there are different meshes, but it’s easier to update and render. And a smaller memory footprint as well.
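For reference, Babylon's thin instances take one flat buffer of 4x4 world matrices via `mesh.thinInstanceSetBuffer("matrix", data, 16)`. A sketch of building that buffer for translation-only instances (the helper name and positions are made up; only the buffer layout follows Babylon's convention of translation at indices 12-14):

```typescript
// Builds the flat Float32Array of 4x4 matrices expected by
// mesh.thinInstanceSetBuffer("matrix", data, 16), for instances that only
// differ by position (identity rotation and scale).
function buildInstanceMatrices(
  positions: Array<[number, number, number]>
): Float32Array {
  const data = new Float32Array(positions.length * 16);
  positions.forEach(([x, y, z], i) => {
    const o = i * 16;
    // Identity rotation/scale on the diagonal...
    data[o + 0] = 1;
    data[o + 5] = 1;
    data[o + 10] = 1;
    data[o + 15] = 1;
    // ...with the translation at indices 12..14.
    data[o + 12] = x;
    data[o + 13] = y;
    data[o + 14] = z;
  });
  return data;
}
```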

I am wondering if merging is necessary at all. Have you actually benchmarked a representative sample scene? I would try regular instances first (which will save a lot of time in terms of asset preparation and coding).

Keep in mind that merging meshes is restricting, if not prohibiting, LODs.

Also, if merged meshes are often only partly visible, then non-merged meshes might be better for performance, if Babylon can cull the non-visible ones.

Finally, my understanding is that you build the entire game world in Blender, and directly in Blender you also want to merge meshes per sector. So I am guessing there is a chance that particular meshes may end up in different sectors (like clutter or nature stuff). This would waste vertices.

FYI: I do not want to convince you not to merge meshes or anything. Just adding pros/cons for your decision making!

Just an additional thought: Instead of enable/disable I would try if I could stream in meshes. Will probably decrease loading times significantly.

This is an important factor. If you have a super wide cam perspective (almost like first person), you likely will need different optimisations; compared to e.g. like a classical iso rpg perspective.


Thanks! And this is what I need :hearts:
I don’t have any experience and I’d like to avoid reworking everything again and again. It doesn’t need to be perfect, just to work in an acceptable manner.

Keep in mind that merging meshes is restricting, if not prohibiting, LODs.

I’m not sure if I’ll need LOD at all. All the assets will be low poly. Game entity visibility is managed by the engine, and environment visibility by this segmentation idea. The visibility range won’t be too big; I can decrease it a little bit more based on the allowed camera settings.

Just an additional thought: Instead of enable/disable I would try if I could stream in meshes. Will probably decrease loading times significantly.

Can you elaborate a little bit? I’m not sure I understand it fully, and it sounds like a good idea.

Yes, indeed, good point. But keep in mind there can be a “don’t-render-mesh” LOD level and mesh impostor sprites. But as you say, it may just not be worth it if it only saves like 10 polygons :face_with_raised_eyebrow:

Do you load all your meshes at game start? Because this probably takes some seconds (if not, then never mind). So you could lazy load (from the filesystem) meshes on demand as they come, or are about to come, into view. And the other way around: if meshes leave the player’s view, you could dispose of them to free memory.
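The load/unload decision when streaming can be sketched as a set difference between the old and new visible chunk sets (a sketch; the names and string keys are illustrative):

```typescript
// When the player crosses into a new segment, compare the previously
// visible set with the new one: chunks entering view should be loaded
// (or fetched), chunks leaving view can be disposed to free memory.
function streamingDiff(
  oldVisible: Set<string>,
  newVisible: Set<string>
): { toLoad: string[]; toDispose: string[] } {
  return {
    toLoad: [...newVisible].filter((s) => !oldVisible.has(s)),
    toDispose: [...oldVisible].filter((s) => !newVisible.has(s)),
  };
}
```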

I should probably point out that I have yet to deliver a finished game. So much for experience :grin:


At the start I’m loading meshes, but into asset containers, and when needed I create instances in an async but sequential manner. Disposing works similarly, but I don’t have an async sequential tool for this yet.

my 2 cents.

I use a similar approach with dividing the game’s world into zones.

But I have a single terrain mesh. With top down camera you have a very limited camera frustum so even with 30k verts it is ok to send it to GPU and a lot of vertices will be discarded. Especially with simple shaders (I think you don’t need PBR, right?)

Also, ~200 DCs (draw calls) is not a really big number for modern mobiles/PCs. So be careful with premature optimisation.

For static content like env objects, buildings, etc. you can probably use a “texture atlas”. In my case I use Unity and Unity’s API to bake it (so multiple textures/materials are merged into a single piece, but you need to update the meshes’ UV attributes). The tricky part here is that if you want to support repeating (when a UV is outside the 0-1 range) you need to implement it in the shader yourself.
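Why repeating breaks inside an atlas, in code: the texture sampler can no longer wrap UVs for you once the texture occupies only a region of the atlas, so the shader has to wrap manually, typically `fract(uv) * regionSize + regionOffset` in GLSL. The same math in TypeScript for clarity (the region values are illustrative):

```typescript
// Wraps a tiling UV coordinate into a sub-region of a texture atlas.
// Equivalent to the usual GLSL fract(uv) * regionSize + regionOffset.
function wrapAtlasUV(
  u: number,
  regionOffset: number, // where the sub-texture starts in the atlas [0, 1]
  regionSize: number    // fraction of the atlas the sub-texture occupies
): number {
  const fract = u - Math.floor(u); // wraps u into [0, 1)
  return regionOffset + fract * regionSize;
}
```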

I also added my own collision detection system because in my game a 2D plane is enough and I don’t need to download a full physics engine like Ammo etc. (~1 MB, that’s a lot!).
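A minimal example of the kind of planar check that can replace a full physics engine in this setting (the shape and field names are illustrative): circle vs. circle overlap on squared distances, avoiding the square root:

```typescript
// 2D collision on the ground plane: two circles overlap when the squared
// distance between centers is less than the squared sum of radii.
interface Circle { x: number; z: number; r: number }

function circlesOverlap(a: Circle, b: Circle): boolean {
  const dx = a.x - b.x;
  const dz = a.z - b.z;
  const rSum = a.r + b.r;
  return dx * dx + dz * dz < rSum * rSum;
}
```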


Yes, I don’t need it.

But I have a single terrain mesh. With top down camera you have a very limited camera frustum so even with 30k verts it is ok to send it to GPU and a lot of vertices will be discarded

I made a new map prototype: 16x16 = 256 segments of size 24x24 (decreased after some camera radius testing). I added a bunch of trees, crypts and some monsters.

this is from inspector:


Looks like it also counts vertices from disabled sectors. But it’s a little bit more than 30k :smiley:
Currently each sector plane has 625 vertices, so 160k (16x16x625) of them come from those.

I think you should test terrain + static objects only without monsters. Monsters and Characters are a separate problem with a different solution.

But even now, you have only 556k faces in total, so something like culling should be enough.

The most inefficient things are mesh selection and animations.

This is my stats ( https://poki.com/en/g/kingdom-heroes ):

The only thing I want to implement/improve is merging static objects(like props and buildings) to a single mesh to reduce DC and webgl commands count.


Thanks!

I think you should test terrain + static objects only without monsters. Monsters and Characters are a separate problem with a different solution.

I believe the problem with mesh selection and animation is that I still don’t have VAT (vertex animation textures). I tried to reduce it without, but failed. Not sure why mesh selection is so high even with alwaysSelectAsActiveMesh.

Do you have VAT in your game? And how do you have total vertices = 0? :smiley:

I use ScenePerformancePriority.Aggressive mode so I think it influences total vertices somehow


To summarize my thoughts and ideas. Perhaps it will help someone.

I’ll stick to the world segmentation and I won’t overcomplicate it by merging everything with the ground. I’ll have a set of meshes dedicated to zones, like trees, rocks, a few types of monsters, etc. I’ll try to merge only some parts of the environment together where it makes sense (e.g. merging a bunch of rocks). I’ll try to use sprites for e.g. grass.

I did the next performance test, adding a bunch of monsters, but with VAT (still didn’t manage to finish it fully), and performance looks really good.
Also, I rewrote all the GUI elements that are visible by default from Babylon.js GUI to HTML, and that also gave a huge performance boost.

On the screen are some environment elements and a bunch of monsters (333 in total - some of them are out of the camera but still active and loaded into the scene).
There’s still a little time left in the frame (but I also need to count the time spent managing moving monsters etc., because right now they are static, so I don’t need to interpolate positions or handle a lot of position update messages).

I almost forgot: my machine is an AMD 4500U without a dedicated GPU, so nothing powerful.
