Draft: Atlas Material

Summary

Create an Atlas Material that allows meshes with different materials but the same shading model (Standard, PBR) to be merged, or rendered within a single bind.

Motivation

Rendering on the web suffers from performance issues. For years there have been discussions about reducing "draw calls" (the draw commands sent from the CPU to the GPU to draw a mesh), which are considered the major bottleneck to high FPS on the web.
But recent benchmarks show that binding materials can take a considerable amount of time, even more than the draw calls themselves.



In this sample with NodePerformanceTest.glb on the sandbox, bindForSubMesh takes half the CPU time, while drawElements, the actual draw call, takes 0.8% of the CPU time.
Tools like glTF-Transform can detect and merge materials with exactly the same props, and can merge baseColor-only materials.
But merging materials isn't always an option, since materials can have different props or even different textures. The easiest way to merge materials with textures is a texture atlas, but this cannot handle meshes whose UVs fall outside the [0, 1] range with customized wrapping.
Also, baking a texture atlas into models can lead to UV precision loss, especially for quantized attributes.

Goals

Create a material that can be constructed from multiple materials with the same shading model, generates a texture atlas at runtime, binds textures once per render for all meshes, and then dispatches draw calls for the meshes.
The layout of the underlying texture atlas should be generated with an efficient algorithm such as potpack.
The texture atlas should be constructed on the GPU with copyTextureToTexture() or copyTexSubImage2D() whenever possible, and mipmaps should be copied from the source texture whenever possible.
UV wrapping and transformation can be applied per mesh, and carried out in the fragment stage.
Underlying materials should have the same texture channels; a missing texture channel could be filled with colors that make the shading result look as if the channel were absent.
Existing per-material props and per-mesh props can be addressed using a mesh texture that contains per-mesh rendering options, collected for each mesh, bound and updated once per frame.
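To illustrate the layout step, here is a minimal shelf-packing sketch. It is not potpack itself (a real implementation could just call potpack); all names and the `maxWidth` parameter are hypothetical:

```typescript
// Minimal shelf packer sketch: takes texture sizes, writes each texture's
// offset inside the atlas, and returns the resulting atlas size.
interface Box { w: number; h: number; x?: number; y?: number; }

function packAtlas(boxes: Box[], maxWidth = 4096): { w: number; h: number } {
  // Sort by height so each shelf wastes less vertical space.
  const sorted = [...boxes].sort((a, b) => b.h - a.h);
  let x = 0, y = 0, shelfH = 0, atlasW = 0;
  for (const box of sorted) {
    if (x + box.w > maxWidth) { // start a new shelf
      y += shelfH;
      x = 0;
      shelfH = 0;
    }
    box.x = x;
    box.y = y;
    x += box.w;
    shelfH = Math.max(shelfH, box.h);
    atlasW = Math.max(atlasW, x);
  }
  return { w: atlasW, h: y + shelfH };
}
```

The resulting per-box offsets are what the fragment stage would use to remap a mesh's UVs into its atlas cell.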

UV process pseudo code

in vec2 uv;

void main() {
    vec2 actualUV = applyTextureTransform(applyAtlas(applyWrapping(uv)));
    vec4 color = texture(baseColor, actualUV);
}
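As a CPU-side sketch of what those three functions might do (all names, signatures, and the cell layout are hypothetical, mirroring the shader math in plain TypeScript):

```typescript
type Vec2 = { x: number; y: number };

// REPEAT wrapping: keep only the fractional part. This must run in the
// fragment stage so the atlas cell is re-entered for every repetition.
function applyWrapping(uv: Vec2): Vec2 {
  const fract = (v: number) => v - Math.floor(v);
  return { x: fract(uv.x), y: fract(uv.y) };
}

// Remap a [0, 1] UV into this mesh's cell inside the atlas,
// using the offsets produced by the packing step.
function applyAtlas(uv: Vec2, cell: { x: number; y: number; w: number; h: number }): Vec2 {
  return { x: cell.x + uv.x * cell.w, y: cell.y + uv.y * cell.h };
}

// Per-material offset/scale texture transform.
function applyTextureTransform(uv: Vec2, offset: Vec2, scale: Vec2): Vec2 {
  return { x: uv.x * scale.x + offset.x, y: uv.y * scale.y + offset.y };
}
```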

Rendering process pseudo code

for each mesh:
   collectMeshInfo();
updateMeshTexture();
bindMeshTexture();
bindMaterial();
bindLightAndShadow();

for each mesh:
   bindGeometry();
   bindMeshId();
   dispatchDraw();

Non-Goals

  • ShaderMaterial
  • NodeMaterial
  • MultiMaterial
  • CustomMaterial
  • MixMaterial

Alternatives

  • Use bindless textures so it's possible to render without binding at all, but this would be WebGPU-only and does not even have a draft spec yet.
  • Unwrap the UVs and bake everything offline so all materials can become one, but this is complex and time-consuming.
  • Create the texture atlas and merge materials before importing models, which causes UV wrapping issues and UV precision loss.
  • Use BABYLON.TexturePacker to merge textures, and merge materials somehow.

Footnote

I know this change is huge and may not land for years, so I'll just keep this as a "draft".


I like the idea. I'm wondering if it can be used with some kind of geometry atlas à la Nanite as well. Basically, you set the geometry/texture budget and the engine decides on its own what to load/unload.

Yeah, I think so. This mainly targets how bind and draw happen, and how to efficiently create and use an atlas from multiple materials/textures, so it should not conflict with a GPU-based culling/LOD pass.
Also, since neither mesh shaders nor multi-draw indirect have landed on the web, there is still a long way to go.

There is already a tool for this that I created a long time ago. It works with both Standard and PBR materials.


That’s great! A good place to start. This works just like the model-targeted atlas generators, but there are still some things remaining:

  1. Textures are merged but materials are not, so the bind performance issue is not addressed.
  2. The construction of the texture atlas is done on the CPU, which means a lot of GPU-CPU copies are made, and the mipmaps are regenerated, which can lead to bleeding when rendering lower mipmap levels.
  3. Texture transforms are not considered, and since it transforms mesh UVs at the vertex level, meshes that use UV wrapping might change appearance.
  4. UVs are dequantized and denormalized, which might lead to more VRAM usage for quantized meshes.

It used the same material, I'm pretty sure. It's been a few years, but if I'm not mistaken it does. Yes, there is some CPU load, but it all happens up front, so there is no impact after compilation. I don't see how you can do UV wrapping with an atlased texture unless we modify the value after its base UV cell is assigned. The UVs are actually normalized in this example, so for quantized ones I'm not sure how they are impacted.


It just does not use the same material after packing in the example playground. Also, the main thing I want to avoid is not the number of textures or materials, but the material.bindForSubMesh call, which can take a massive amount of CPU time. Merging textures and materials is the proposed way to get there; the major change in this proposal is the rendering process.


Yes, there is no impact compared to before packing, but this proposal is about changing the rendering process to make it more efficient.

Current rendering model in pseudo code:

for each mesh:
    mesh._bind();
    material.bindForSubMesh();
    mesh._processRendering();// the draw call

Proposed rendering model:

for each atlas material:
    for each mesh:
       collectMeshInfo();// per-mesh props

    updateMeshTexture();
    bindMeshTexture();
    bindMaterial();
    bindLightAndShadow();

    for each mesh:
       bindGeometry();
       bindMeshId();
       dispatchDraw();// the draw call

To do this, UV wrapping needs to be done at the fragment stage, where the UV is interpolated for the exact pixel.
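The proposed loop above can be sketched with stubbed bindings that just record call order, to show that the material/light binds happen once per atlas material while the draws stay per mesh. The function names are illustrative, not an actual Babylon.js API:

```typescript
// Record every binding call so the ordering can be inspected.
const calls: string[] = [];
const record = (name: string) => () => calls.push(name);

const collectMeshInfo = record("collectMeshInfo");
const updateMeshTexture = record("updateMeshTexture");
const bindMeshTexture = record("bindMeshTexture");
const bindMaterial = record("bindMaterial");
const bindLightAndShadow = record("bindLightAndShadow");
const bindGeometry = record("bindGeometry");
const bindMeshId = record("bindMeshId");
const dispatchDraw = record("dispatchDraw");

function renderAtlasMaterial(meshCount: number): void {
  for (let i = 0; i < meshCount; i++) collectMeshInfo(); // per-mesh props
  updateMeshTexture();
  bindMeshTexture();
  bindMaterial();      // once per atlas material, not once per mesh
  bindLightAndShadow();
  for (let i = 0; i < meshCount; i++) {
    bindGeometry();
    bindMeshId();
    dispatchDraw();    // the draw call
  }
}
```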

Here, quantized means the mesh (or vertex buffer) uses non-float attributes to save VRAM. For example, a common UV mapping is float32 in [0, 1], 8 bytes per vertex, but a quantized UV can be a normalized uint16 in [0, 65535], 4 bytes per vertex, half the original size, where normalized means the GPU will map [0, 65535] back to [0, 1] in shaders.
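A minimal sketch of that uint16 normalized round trip (function names are mine):

```typescript
// quantize: float [0, 1] -> uint16 [0, 65535]
function quantizeUV(uv: number): number {
  return Math.round(Math.min(Math.max(uv, 0), 1) * 65535);
}

// dequantize: what a normalized uint16 attribute yields in the shader
function dequantizeUV(q: number): number {
  return q / 65535;
}
```

The round trip introduces an error of up to 1/131070 per component, which is why re-baking atlas-remapped UVs into quantized attributes loses precision.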


The material thing would be an oversight. They can all share the same material in theory, since the atlas is bound the same way. It would be worth a hot fix to correct that. Let me think about your other insights.

Great input, by the way. Sorry, I'm on my phone off-roading right now, otherwise I'd be more engaged with you. But yes, let's get this all correct.


Well, a material can have props other than textures, like diffuseColor and emissiveColor in the case of StandardMaterial in this playground. That's why a mesh texture is needed, where these props can be kept and passed to the shader.
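As a sketch of what that mesh texture could look like, here per-mesh props are packed into two RGBA pixels per mesh, indexed by meshId in the shader. The layout and names are hypothetical:

```typescript
interface MeshProps {
  diffuseColor: [number, number, number];   // rgb
  emissiveColor: [number, number, number];  // rgb
}

const PIXELS_PER_MESH = 2; // pixel 0: diffuse, pixel 1: emissive

function buildMeshTextureData(meshes: MeshProps[]): Float32Array {
  const data = new Float32Array(meshes.length * PIXELS_PER_MESH * 4); // RGBA floats
  meshes.forEach((m, i) => {
    const base = i * PIXELS_PER_MESH * 4;
    data.set(m.diffuseColor, base);      // alpha slots stay free for other props
    data.set(m.emissiveColor, base + 4);
  });
  return data;
}
```

In Babylon.js this buffer could presumably be uploaded as a raw RGBA float texture (something like BABYLON.RawTexture) and refreshed once per frame.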


We can bind all of that, no problem. It will take some updates. I'll PM you tomorrow and we can discuss this more.