How to plan an art performance budget for web games

Hi there,

We've been using Babylon for about a year now and we're very happy with it, and we're moving from prototyping to production soon. As part of that move, we're starting to create much higher-fidelity art, and we're wondering if the community can share what they've learned about setting runtime art budgets for the web.

Here is a little preview of what we're working on: a game that lets users assemble and play with fun little robot agents (we call them Mels). Once assembled, your Mel comes to life using our proprietary real-time reinforcement learning tech and physics engine.

Until now, we've been focusing on physics and inference performance, which is our top priority. We're keeping our scenes relatively simple so that the physics engine runs as smoothly as possible.

Several of us are console gaming veterans, so we’re well-versed in how to think about art budgets, but we don’t know what we don’t know about the things we’re going to encounter as we scale up the volume and fidelity of content in a web browser, in a game engine that is still pretty new to us.

Our scenes will be simple in terms of the overall number of meshes, particle emitters, etc., but we'd like to design around what we know will perform well before we finalize art direction and the complexity of our scenes.

Anyone have any words of wisdom about polycounts, overall texture usage, max texture sizes, RAM usage, and asset download times? Assume that we want to support:

  • A large base of smartphones and tablets
  • A large base of Chromebooks
  • Laptops with integrated GPUs
  • And powerful desktops as well…

Thanks all, we really appreciate all the help you’ve given.

3 Likes

Congrats on the move to production!
Maybe @PatrickRyan has some resources on web art asset budgets.

@HowdyLuda, like @Cedric mentioned, congrats on moving to production! In terms of art budgets for the scene, you could take a look at the Khronos asset creation guidelines, which are specifically aimed at creating assets for e-commerce using glTF. Their triangle budgets will be high for what you want, but it is a good primer for thinking about your assets when designing for glTF. I am assuming you are using glTF for your runtime format, but if you are targeting Babylon files, much of this resource will still apply. There is also the primer that we made for considerations in your art pipeline. The rule of thumb I usually use is that my triangle budget is spent keeping the silhouette of my asset, and I try to keep hero assets in the 15k-25k triangle range. Lower is better. This would be for a scene that is fairly complex with a lot of assets.

In terms of the biggest things to be aware of, these would be at the top of my list:

  • File size for assets is key because they need to be downloaded every time you load your game. Unlike consoles, where you can download a binary of assets to the local drive, everything in your scene will be downloaded every time a user visits. Cache can help here, but you should expect that your assets will be downloaded multiple times per user due to cleared caches. Keeping them small, downloading only the assets you need at first, and asynchronously downloading the rest as they are needed is key (see the incremental-loading sketch after this list). And remember that users on smartphones will often not be on wifi, so you may need to plan for slow networks.
  • The web is single-threaded, so everything in your scene has to share that one thread. You can use web workers to offload some things (see the web worker sketch after this list), but if you are using physics you are sometimes doing heavy computations per frame. Anything else that also has to be done per frame is going to take cycles from that. Things like dynamic shadows, shader FX, particle systems, etc. all have to be balanced to use as little CPU time as possible.
  • Keep your draw calls down as low as you can. Split your materials only when needed, such as when you need a different material type (alpha versus opaque). Merge every mesh you can (see the mesh-merging sketch after this list). Every mesh and material is another draw call, so make sure that you only use what you need.
  • Dispose of what you don’t need in your scene. Every glTF will load with a __root__ node and whatever materials are in the file if a mesh is present. There are a lot of times when you may want to load a mesh and assign a material created in the scene at runtime. Make sure you dispose of the materials loaded with the mesh that aren’t used, and if you are moving meshes to another hierarchy - such as with an attachment - move only the mesh and dispose of the __root__ node (see the attachment sketch after this list). The __root__ node is there to correct for the handedness conversion between glTF and Babylon. If you create your scene as a right-handed scene, the __root__ node is not created, but Babylon is inherently left-handed, so there may be other things you run into converting the scene to right-handed that aren’t apparent from the start.
  • Textures are always the heaviest part of an asset. We typically limit our texture sizes to what we need based on how close the camera gets to an asset. If it can get right on top of the asset, 1k-2k textures may be needed, but if not, keep them as small as possible to save on download time and size in memory.
  • Channel pack textures where possible. If you have grayscale maps that can share one file (like ambient occlusion, roughness, and metallic), pack them into separate channels of a single texture (see the channel-packing sketch after this list). It saves on the number of files to download as well as the number of textures to keep in memory.
  • Texture and mesh compression (KTX2 and Draco) can help: KTX2 textures stay compressed in GPU memory, and Draco shrinks mesh download size, but the pipelines to create these compressed assets aren’t super common.
  • WebGL1 vs WebGL2 is something to keep in mind. Babylon will handle falling back to WebGL1 when a client does not support WebGL2, but there are some features you may not want to use, such as GPU particles, if you know some of your user base will be on browsers that don’t support WebGL2 (see the particle fallback sketch after this list).
  • UI for your experience will be more akin to the pitfalls of CSS, as your canvas is a DOM element and you will need to treat the experience like any website: the window can be resized at any moment or presented on both landscape and portrait screens (see the resize sketch after this list). Our GUI system is loosely based on CSS, but you may even need to split your UI between the DOM and the canvas. We have accessibility features for Babylon in the canvas where you can tag nodes in the scene with ARIA tags, but how you create the experience with accessibility in mind may lead you down one path or the other. Most browser experiences come with accessibility options like tab lists, but if you are keeping your UI inside the canvas, you need to manage focus, because if the canvas isn’t focused, tabbing will move through DOM elements. Testing out scenarios like this will help you decide how to manage your graphics for the UI.
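
To make the incremental-loading point concrete, here is a minimal sketch of blocking only on the first asset and streaming the rest in the background. The file names ("mel-base.glb", "props.glb") and folder are placeholders, not anything from your project.

```ts
import { Scene, SceneLoader } from "@babylonjs/core";
import "@babylonjs/loaders/glTF"; // registers the glTF loader

// Load only what the first interaction needs, then stream the rest without blocking.
// "mel-base.glb" and "props.glb" are placeholder file names.
async function loadInitialAssets(scene: Scene): Promise<void> {
  // Block only on the hero asset the player sees immediately.
  await SceneLoader.ImportMeshAsync("", "assets/", "mel-base.glb", scene);

  // Fetch secondary content in the background and add it to the scene when it arrives.
  SceneLoader.LoadAssetContainerAsync("assets/", "props.glb", scene)
    .then((container) => container.addAllToScene())
    .catch((err) => console.warn("Deferred asset failed to load", err));
}
```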
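
For the web worker point, a minimal sketch of pushing a heavy, non-rendering computation off the main thread so it does not eat into your frame budget. The workload here is a stand-in, not anything from the engine; the worker body is inlined via a Blob just so the sketch is self-contained.

```ts
// Stand-in heavy computation: sum of squares over a large buffer.
const workerSource = `
  self.onmessage = (event) => {
    const samples = event.data;
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
    self.postMessage(sum);
  };
`;
const blobUrl = URL.createObjectURL(new Blob([workerSource], { type: "text/javascript" }));
const worker = new Worker(blobUrl);

worker.onmessage = (event) => console.log("worker result:", event.data);
// The render loop keeps running on the main thread while the worker crunches this.
worker.postMessage(new Float32Array(100_000));
```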
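
For the mesh-merging point, Mesh.MergeMeshes combines meshes that share one material into a single mesh, which is a single draw call. The "Static_" name filter below is just an assumed convention for non-moving parts that share a material.

```ts
import { Mesh, Scene } from "@babylonjs/core";

// Merge static meshes that share one material into a single mesh = one draw call.
// The "Static_" prefix is an assumed naming convention, not a Babylon feature.
function mergeStaticParts(scene: Scene): Mesh | null {
  const parts = scene.meshes.filter(
    (m): m is Mesh => m instanceof Mesh && m.name.startsWith("Static_")
  );
  // disposeSource = true frees the originals; allow32BitsIndices = true permits >64k vertices.
  return Mesh.MergeMeshes(parts, true, true);
}
```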
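
And for the attachment/disposal point, a rough sketch of re-parenting only the mesh you need and throwing away the loaded material and the __root__ node. The file name "arm.glb", the mesh name "Arm_L", and the in-scene material are made-up placeholders.

```ts
import { PBRMaterial, Scene, SceneLoader, TransformNode } from "@babylonjs/core";
import "@babylonjs/loaders/glTF";

// Load an attachment, keep only the mesh we need, and dispose of the rest.
// "arm.glb", "Arm_L", and inSceneMaterial are placeholder names.
async function attachArm(scene: Scene, socket: TransformNode, inSceneMaterial: PBRMaterial) {
  const result = await SceneLoader.ImportMeshAsync("", "assets/", "arm.glb", scene);
  const root = result.meshes.find((m) => m.name === "__root__");
  const arm = result.meshes.find((m) => m.name === "Arm_L");
  if (!arm) return;

  const loadedMaterial = arm.material;  // material that came in with the glTF
  arm.material = inSceneMaterial;       // swap in the material built at runtime
  arm.setParent(socket);                // re-parent while keeping the world transform

  loadedMaterial?.dispose(false, true); // drop the unused material and its textures
  root?.dispose();                      // drop the handedness-conversion root and anything left under it
}
```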
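
For channel packing, these PBRMaterial flags tell Babylon which channels to read from a packed occlusion/roughness/metallic texture (the glTF loader sets this up for you when the file is authored that way). The texture path is a placeholder, and the exact flags you need depend on how you packed the map.

```ts
import { PBRMaterial, Scene, Texture } from "@babylonjs/core";

// Wire one packed ORM texture (AO in R, roughness in G, metallic in B) into a PBR material.
// "assets/mel_orm.png" is a placeholder path.
function applyPackedOrm(scene: Scene, material: PBRMaterial): void {
  const orm = new Texture("assets/mel_orm.png", scene);
  material.metallicTexture = orm;
  material.useAmbientOcclusionFromMetallicTextureRed = true;
  material.useRoughnessFromMetallicTextureGreen = true;
  material.useRoughnessFromMetallicTextureAlpha = false; // roughness lives in green, not alpha
  material.useMetallnessFromMetallicTextureBlue = true;  // spelling matches the Babylon property name
}
```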
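
For the particle fallback, GPUParticleSystem.IsSupported lets you pick GPU particles only when the client can run them and fall back to the CPU system otherwise. The texture path and capacities below are placeholders.

```ts
import { GPUParticleSystem, ParticleSystem, Scene, Texture, Vector3 } from "@babylonjs/core";

// Use GPU particles only where supported (WebGL2); otherwise fall back to CPU particles.
function createSparks(scene: Scene) {
  const system = GPUParticleSystem.IsSupported
    ? new GPUParticleSystem("sparks", { capacity: 10000 }, scene)
    : new ParticleSystem("sparks", 1000, scene); // smaller budget on the CPU path
  system.particleTexture = new Texture("assets/spark.png", scene); // placeholder texture
  system.emitter = Vector3.Zero();
  system.start();
  return system;
}
```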
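
Finally, for the resize point, the engine needs to be told whenever its canvas changes size. "renderCanvas" is an assumed element id, and the hardware-scaling line is an optional knob you might use on low-end GPUs.

```ts
import { Engine } from "@babylonjs/core";

// Keep the backbuffer in sync with a layout that can change at any moment.
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement; // assumed element id
const engine = new Engine(canvas, true);

window.addEventListener("resize", () => engine.resize());

// Optional: render at a lower internal resolution on weak GPUs (values > 1 reduce resolution).
engine.setHardwareScalingLevel(1.5);
```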

These are just a few ideas to keep in mind as you start looking at your art pipeline. Feel free to ping me with more specific questions if you have them.

5 Likes

Hey, thank you so much for this! All of those optimizations are great to call out. Crunching file sizes down as small as possible is the real challenge here, for sure.

I’m looking forward to someday having KTX compression handled in the Blender glTF exporter. For us, manually mapping each texture to its corresponding material in code isn’t really viable; it would create too much work for the team.
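
Just to illustrate the kind of per-texture plumbing we'd be signing up for, here's a rough sketch of that manual mapping (assuming a made-up convention where every PNG has a pre-generated .ktx2 sibling and a KTX2 decoder is available); multiply this by every texture slot and asset in the library and it stops scaling fast.

```ts
import { PBRMaterial, Scene, SceneLoader, Texture } from "@babylonjs/core";
import "@babylonjs/loaders/glTF";

// Rough sketch of the manual approach: after import, swap each base color texture
// for a pre-compressed .ktx2 sibling. Paths and naming convention are placeholders.
async function loadMelWithKtx2(scene: Scene): Promise<void> {
  const result = await SceneLoader.ImportMeshAsync("", "assets/", "mel.glb", scene);
  for (const mesh of result.meshes) {
    const material = mesh.material;
    if (material instanceof PBRMaterial && material.albedoTexture) {
      // Assumes the loaded texture name ends in .png/.jpg and a .ktx2 twin sits next to it.
      const ktx2Url = material.albedoTexture.name.replace(/\.(png|jpe?g)$/i, ".ktx2");
      material.albedoTexture.dispose();
      material.albedoTexture = new Texture("assets/" + ktx2Url, scene);
    }
  }
}
```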

1 Like