Performance of large, baked PBR texture sets versus small tiled textures

I’ll eventually get around to benchmarking this myself but was wondering if anyone had any experience comparing performance on a scene of meshes with predominantly:

  1. Large (e.g. >= 4K) baked PBR texture sets, i.e. few materials overall
  2. Versus: small (e.g. <= 512) tiled textures, i.e. more materials overall

I realise more materials result in more draw calls, but larger textures take longer to download, and I imagine there's a performance cost to displaying larger versus smaller textures?

For example, a sofa with rough fabric, visible beading and stitching, chrome legs, a few different cushions, and a throw rug. That could be baked to a single PBR material with, say, 4 x 4K textures, OR use multiple materials with much smaller tiled textures.
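To put rough numbers on the trade-off, here is a back-of-envelope comparison of raw texture data for the two approaches. The material counts for the tiled case are hypothetical, and the figures assume uncompressed RGBA8 (4 bytes per pixel); real downloads use compressed formats, so this only illustrates relative scale, not actual file sizes.

```javascript
// Illustrative only: raw (uncompressed RGBA8) texture memory for each approach.
const bytes = (size, count) => size * size * 4 * count;

// Approach 1: one material with 4 x 4K baked maps
const baked = bytes(4096, 4);

// Approach 2 (hypothetical counts): six materials, each with 3 x 512 tiled maps
const tiled = bytes(512, 6 * 3);

console.log((baked / 1024 / 1024).toFixed(0) + " MB vs " +
            (tiled / 1024 / 1024).toFixed(0) + " MB"); // -> "256 MB vs 18 MB"
```

Even allowing for compression narrowing the gap, the tiled approach starts from a far smaller raw footprint, which is the intuition behind the question.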

I know the most common approach is just to bake out large textures with a single material, but for web deployment that just seems wasteful and lazy. I would like to think that with more careful planning (but also more materials) the file size and visual fidelity of models could be improved with little to no overall performance hit.

Am I delusional?


The second approach seems to be more universal and flexible (of course, it will always depend on the application).
First of all, some low end devices have memory limitations and do not support ‘big’ textures.
To check just run engine.getCaps().maxTextureSize
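As a sketch of how you might act on that cap, here is a hypothetical helper that steps a desired resolution down through power-of-two sizes until it fits the device limit. The `pickTextureSize` function is an illustration, not a Babylon.js API; only `engine.getCaps().maxTextureSize` is the real call.

```javascript
// Hypothetical helper: clamp a desired texture resolution to the device cap,
// falling back through power-of-two sizes (4096 -> 2048 -> 1024 -> ...).
function pickTextureSize(desired, maxTextureSize) {
  let size = desired;
  while (size > maxTextureSize && size > 1) {
    size /= 2;
  }
  return size;
}

// In Babylon.js the cap would come from the engine:
// const max = engine.getCaps().maxTextureSize;
console.log(pickTextureSize(4096, 2048)); // -> 2048
```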
Second, if you need to change, for example, the chrome legs to golden ones, you'll need another big baked texture too (or you have to include all the variants in one texture set).
So the second approach gives more control as well.
Also, with PBR materials there are a lot of cases when it is possible not to use any textures and get a great visual look just with some tuning of the native material properties.


I think @PirateJC and @PatrickRyan could offer some nice perspective here :smiley:


@inteja I have not done any hard profiling comparing one material with a larger texture set - I normally limit myself to 2K as the largest possible texture size - against several materials with smaller textures. I think you could possibly get some trends with this, but there are a lot of variables that will have an impact and could possibly be tuned to find a middle ground between one and many materials.

One of the simplest is that more materials will mean more meshes to apply those materials to. So you are going to incur further draw calls from additional meshes when splitting materials. However, you could combine several materials (in your example, maybe a few cushions and legs in one material and the rug and sofa structure in another) into non-tiling atlases to maximize UV space. You can then use smaller texture sizes with the same texel density as all UV islands in one large texture and not lose any fidelity. This could get around any browser limitations that @labris mentioned, while still maintaining fewer materials and meshes resulting in fewer draw calls.
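The texel-density argument above can be sanity-checked with a quick calculation. This is an illustrative sketch assuming a rectangular UV island with uniform mapping; `texelDensity` and the sofa numbers are made up for the example.

```javascript
// Texel density = texture pixels covering one world-space unit.
// Illustrative only: assumes a uniform, rectangular UV mapping.
function texelDensity(textureSize, uvSpan, worldSize) {
  // uvSpan: fraction of the 0..1 UV range the island occupies
  // worldSize: size of the corresponding surface in world units (e.g. metres)
  return (textureSize * uvSpan) / worldSize;
}

// A 2 m sofa body occupying half of a 4K atlas:
const big = texelDensity(4096, 0.5, 2);   // -> 1024 px/m
// The same body using all of a dedicated 2K texture:
const small = texelDensity(2048, 1.0, 2); // -> 1024 px/m

console.log(big === small); // same density, smaller texture
```

The point being: if a UV island only occupies part of a large atlas, a smaller dedicated texture can deliver the same on-screen fidelity.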

However, you could go about it another way. Really, the only texture that needs an atlas unwrap is the AO map. The rest could be a combination of tiling textures that are much smaller. A 128 x 128 texture of the sofa material could be more than enough if the texture tiles enough times on the model. In this way, you could use several UV sets to unwrap your model depending on the texture you will pass to it. This means you can have one larger texture and many smaller textures based on UV set. This would also reduce the overall file size for download while maintaining fewer draw calls.
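The arithmetic behind "tiles enough times" can be sketched like this. `tileRepeats` is a hypothetical helper and the target density is an assumed figure; in Babylon.js the result would feed something like a texture's `uScale`/`vScale`, assuming uniform mapping over the surface.

```javascript
// How many times must a small tiling texture repeat across a surface
// to hit a target texel density? (Hypothetical helper, illustrative numbers.)
function tileRepeats(targetDensityPxPerMetre, tileSizePx, worldSizeMetres) {
  return (targetDensityPxPerMetre * worldSizeMetres) / tileSizePx;
}

// Match 1024 px/m across a 2 m sofa seat with a 128 px fabric tile:
const repeats = tileRepeats(1024, 128, 2);
console.log(repeats); // -> 16
// In Babylon.js this would become e.g. texture.uScale = repeats
// (assumption: the seat's UV range spans 0..1 in that direction).
```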

You could also go with a single node material and many smaller texture sets that wire into one material and break based on any number of methods (UV set, UV position, vertex color, etc.). I have an example of this type of approach using UV position to break between texture sets that passes three different PBR sets of textures (two with emissive maps) each to a section of the single mesh in the scene.

You may find, however, that this method is slower because the shader fetches each pixel from all three texture sets before selecting which one to return; depending on your scene and the size of your textures, it may be faster to split the meshes and materials.

I don’t think there is a set right answer here because there are a lot of moving parts. In some instances, using one material and mesh may be the best because you have a lot of meshes to load and manipulate in your complex scene. In other cases, you may be loading a single glTF and so having multiple meshes and materials isn’t that big of a hit to your performance.

@carolhmj is working on the final implementation of the performance profiler that @SahilTara built for his summer internship, so you will be able to use that to do some testing in the near future. In the meantime, I hope this helped frame your thinking about the issue. Feel free to bounce back more questions as you think through the problem.


Thanks so much for the awesome information @PatrickRyan! You’re always so helpful and comprehensive :slightly_smiling_face:

I haven’t used (or even considered) UDIMs / texture atlases before. There’s upside, but I guess some downsides are more complexity in creation and potentially less texture reusability/composability between materials? I’m ideally looking to create a material library system that loads textures on demand, reuses and composes them into a variety of materials for a given scene.

Your node material example is cool.

I’ll play around and be sure to benchmark each approach for my use case.
