@BNARob, welcome to the community! I will do my best to break down some techniques, but feel free to reach out if anything is too deep or I don’t cover enough. If you are aiming for ultra-realism in real time, you need to start with a few questions. I’m starting with these because I can’t see what your results are, so I’m giving you some broad questions to hopefully lead you to the right solution.
- Is my scene entirely static? By static, I mean that you are presenting the scene as a slice in time to the user. There is nothing that changes in your scene like time of day (which will change lighting if there is any influence from the sun), positions of any meshes (like the user moving objects around the scene, which will change shadows and bounce light), or user customization (allowing users to change the material properties of objects in your scene, which will affect bounce light or reflections).
- If my scene is not entirely static, what are the limits of any possible interactions? If you need to allow user interaction, can you limit the types of interaction or know what the extremes are? For example, if you are showing a digital walk-through of an apartment and want to show what it looks like at different times of day, can you set up a couple of pre-determined times rather than giving the user complete control? Like a version showing mid-day and one showing sunset. Any limits you can place on customization can help you dial in textures and methods to hit 60 fps and make the experience smooth.
- What devices will my users view this content on? This is probably the most telling question for how you approach your scene. If you expect users to be on low-end devices, devices without broadband service, or browsers that do not support the latest WebGL2 features (https://get.webgl.org/webgl2/), you will need to make some trade-offs to keep performance high across all devices.
Once you know that, you can start to test out prototypes to see where you are falling on your target devices. Simple tests can cover mesh weight (number of triangles) and texture size (which affects download time) to see where performance falls short.
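If it helps, you can also sniff out what a device actually supports at startup and pick a rendering path from there. A minimal sketch, assuming `canvas` is your render canvas:

```js
const engine = new BABYLON.Engine(canvas, true);

if (engine.webGLVersion < 2) {
    // WebGL1 fallback: lean on baked lighting, skip real-time probes
    console.log("WebGL2 unavailable - using the fully baked path");
}

// Device limits you can test your texture budget against
const caps = engine.getCaps();
console.log("Max texture size:", caps.maxTextureSize);
```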
I am only guessing that you are creating some sort of interior space with some windows to the exterior, but you should be able to extrapolate from here to your project. Your tests should tell you where you should land in terms of texture size, and then the question becomes the number of textures you need to download. There will need to be some experimentation to understand how much physical area you need to cover with your textures and when those textures do not have enough texels to represent your model at high fidelity.
Here are some things you can do to help:
- Tile textures when you can to help with texel density versus texture size (there’s a quick sketch of this after the list).
- Utilize multiple UV sets to separate base color from light/shadow maps. This allows you to use tiling textures while still baking light maps, which need to be baked as an atlas and cannot be tiled. Note that glTF guarantees support for two UV sets (TEXCOORD_0 and TEXCOORD_1) but does not require support for more than two, so for compatibility, you should limit your models to two.
- Split large models into smaller chunks to maximize texel density in your textures and make the most of your UV space.
- Split models based on material type. If you have meshes that need transparency or other special needs like clear coat, sheen, anisotropy, etc., group those meshes into the same materials or mesh groups so that you can UV effectively for the extra textures or set similar blend modes in the materials.
- Split your meshes or materials to minimize the number of textures to download. You can assume that using 1K textures is going to be more optimized than using 2K or 4K textures, but if smaller textures mean many times the number of textures, the returns diminish quickly.
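To make the tiling idea from the first bullet concrete, here is a minimal Babylon.js sketch. The texture path, material name, and `scene` variable are placeholders, not anything from your project:

```js
// Repeat a small tiling texture across a mesh to raise texel density
// without raising texture size or download weight
const floorMat = new BABYLON.PBRMaterial("floorMat", scene);
const albedo = new BABYLON.Texture("textures/woodFloor_1k.jpg", scene); // placeholder path

// Repeat the 1K texture 4x in each direction across the mesh's UVs
albedo.uScale = 4;
albedo.vScale = 4;
floorMat.albedoTexture = albedo;
```

A tiled 1K texture like this can often stand in for a unique 4K texture on large surfaces like floors and walls.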
So to your specific questions, I would always separate the ambient occlusion and light/shadow maps from the diffuse/base color textures. They don’t need their own UVs unless you are using tiling textures for diffuse/base color, in which case the diffuse/base color will need one UV set and the AO, light, and shadow maps would use another. This matters because if you don’t have a fully static scene (light changes, materials change, mesh positions change), you don’t want any lighting information baked into the base color; when you move a light source, all of the maps can then adjust correctly to the scene lighting (punctual lighting as well as IBL for PBR).
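As a rough sketch of what that separation looks like in Babylon.js (paths and names are placeholders, and I’m assuming the mesh was exported with two UV sets):

```js
const wallMat = new BABYLON.PBRMaterial("wallMat", scene);

// Tiling base color sampling the first UV set (the default)
const albedo = new BABYLON.Texture("textures/plaster_tiling.jpg", scene);
albedo.uScale = 3;
albedo.vScale = 3;
wallMat.albedoTexture = albedo;

// Baked light/shadow atlas sampling the second UV set
const lightmap = new BABYLON.Texture("textures/wall_lightmap.jpg", scene);
lightmap.coordinatesIndex = 1;          // use TEXCOORD_1
wallMat.lightmapTexture = lightmap;
wallMat.useLightmapAsShadowmap = true;  // modulate lighting instead of adding to it
```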
Reflection in your scene comes from a couple of places. One is your image-based lighting (IBL) and the other is either a reflection texture or a reflection probe. With a static model, you could bake all lighting and reflection into your diffuse/base color. You would likely need to increase your texture size somewhat and would not be able to use tiling textures, but you would be able to reduce the overall number of textures (one per material rather than multiples) and render each material as an unlit material, requiring no lights or lighting calculations in your scene. You will get a high-quality scene on a low-end device with this method, but you can’t change anything. Otherwise, with any change, you need to handle reflections separately from everything else.
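For that fully static case, the unlit path in Babylon.js is about as cheap as it gets. A minimal sketch, with a placeholder texture path:

```js
// Everything (lighting, shadows, reflections) baked into one texture per
// material; unlit skips all lighting calculations, punctual and IBL alike
const bakedMat = new BABYLON.PBRMaterial("bakedMat", scene);
bakedMat.albedoTexture = new BABYLON.Texture("textures/room_baked.jpg", scene);
bakedMat.unlit = true;
```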
IBL tends to be more generic since it usually starts with a high dynamic range image (HDRI) that contains 32-bit floating-point information for simulating lighting in your scene. So if you have highly polished elements in your scene, like a chrome fixture, you may need to render out your own HDRI from your digital content creation (DCC) tool like Blender, Maya, Max, etc. This will make your lighting and reflection correct, but even then you are limited to a static scene. If you start changing your scene around (material colors, mesh positions, etc.), you will end up with reflections that no longer match.
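Loading a custom HDRI you rendered from your DCC tool looks something like this; the .hdr path and the 512 prefilter size are placeholder values:

```js
// Use your own HDRI as the scene's IBL source
const hdrTexture = new BABYLON.HDRCubeTexture("env/apartment.hdr", scene, 512);
scene.environmentTexture = hdrTexture;

// Optionally display it as the visible background as well
scene.createDefaultSkybox(hdrTexture, true);
```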
This brings us to the feature that gets you closest to realistic rendering, but it is also very expensive and may not work on browsers that do not support WebGL 2: real-time reflection probes with real-time filtering. Basically, this allows you to render reflection cube maps in real time and filter them in real time for roughness calculations in PBR reflection. https://doc.babylonjs.com/how_to/how_to_use_reflection_probes
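The linked docs cover this in depth, but the basic setup is short. Mesh and material names here are placeholders:

```js
// Render the surrounding meshes into a cube map the probe keeps updated
const probe = new BABYLON.ReflectionProbe("mirrorProbe", 512, scene);
probe.renderList.push(floorMesh);
probe.renderList.push(wallMesh);

// Feed the probe's cube map into the reflective material
chromeMat.reflectionTexture = probe.cubeTexture;

// Probes are expensive; drop the refresh rate when the scene changes slowly
probe.refreshRate = BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONEVERYTWOFRAMES;
```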
You can see that there are a lot of avenues you can take to do things like this, and I didn’t even step into custom shaders through node materials, where you could pass custom textures to optimize or unlock techniques for your scene. There is a lot to take in here and I only brushed the surface. If you want to dive deeper, please make a playground so we can see where you are having difficulty. I would also start a new thread just for your questions so everyone can find the conversation easily. In the meantime, here are a couple of links that can also help with lighting and baking lights:
Hope this helps you get started moving toward what you want to render.