Can we achieve this kind of quality in our BabylonJS?

Hi,

I have a very basic question and would appreciate your guidance.

Please look at the attached image. It is so realistic.

  1. Can we achieve this kind of quality in our BabylonJS?
  2. Can we achieve this with BabylonJS in real time? I mean, without pre-baking it for hours?
  3. Can we achieve this with DirectX in real time?
  4. What do I need to do, and what do I need to have, to achieve this kind of quality in real time?

My application is a web application in which the user can select a room like the one shown in the picture and start applying various textures to floors and walls. The output should look realistic. As it is an interactive application, we cannot afford hours of baking. If we can achieve it with BabylonJS, great. If not, but we can achieve it with DirectX, maybe I can load the model on the server using SharpDX and WARP (Windows Advanced Rasterization Platform (WARP) Guide - Windows applications | Microsoft Docs), take a screenshot, and deliver it back to the client.

Please help me on this topic. Thanks in advance.

  1. Well, it depends on what you mean by quality :slight_smile: This one is a raytraced rendering. It could be done with BabylonJS, but you have to provide prebaked lighting, for instance.
  2. Yes, with prebaked lighting. The reflections can be done with really high-quality cubemaps, but prebaking for hours will be necessary. The point here is mostly about really high-resolution textures, antialiasing, and global illumination.
  3. Probably, on the latest 2080 Ti with RTX.
  4. My best hint would be to say: try :slight_smile: Create a simple model in Blender and try to get as close as you can to your goal using our more advanced options (lightmapping will be key: From Blender to Babylon - standard workflow).

Please educate me on this topic with some reference material/documentation/samples.

  1. What is pre-baked lighting?

In my scene, the positions of the lights are fixed.
The user clicks a mesh and selects a texture image from a list of thumbnails, and the selected texture should be applied to that mesh. The texture image itself will not have any reflections, but once it is applied to the mesh, reflections should be shown to make it realistic.

Pre-baked lighting is also known as a lightmap.

So no matter what your textures are, the lightmap can hold all the lighting and shadows.
For instance, for our Espilit demo (Babylon.js - Glowing Espilit demo), this is the lightmap used:


So, once the light map is built for a model, it can be reused, right?
In that case, how do I build that light map (with raytraced rendering)?

This is another topic and not for this forum directly :slight_smile:
Which tool are you using? 3ds Max, Maya, Blender?


Maya.

So yeah, you need to find some documentation on how to bake lighting in Maya.
Let me try to see if the almighty @PatrickRyan can give you some pointers.


Please correct me with my understanding.

  1. The light map image is always used as the ambient texture.

  2. When we need to change the texture image of a mesh, we keep the ambientTexture of the mesh's material as is and change only the diffuseTexture. If we do so, the lighting and shadows will be preserved.

I just made public a demo we’ve made at my company, which may give you an idea of what you can expect: Virtual Staging - Apartment configurator ; by Axeon Software

And yes, your lightmap will only contain lighting information, so you can change the albedo while keeping your lighting.


@Subrahmanya_Chakrava, the best way to think about configuring your assets for Babylon is to treat them as real-time game assets. You can get high-quality rendering out of a real-time engine, but you can’t approach the assets as you would if you were ray tracing them.

This breaks down into two main issues when converting from pre-render workflows to real-time render workflows:

  • Shadows, including self shadowing
  • Global illumination and reflections

For shadows, the trade-off is this: to get soft shadows that are affected by bounced light (global illumination), you will need to bake your shadows into a light map for your object, which limits the object to never moving and always sitting in the same position in your scene. You can get soft, real-time shadows, but those are more expensive to render at run time and won’t be affected by global illumination, as we aren’t calculating any bounces in the lights (which would lead us to real-time ray tracing, and that requires a lot of compute power). Self-shadowing is also a more expensive method that is easily solved by baking your lights. You are more performant with a baked shadow map than with real-time self shadows, but you further lock down the asset, as it can’t have animation or the baked shadows become obvious.

A great tutorial for baking light maps in Maya is available at Pluralsight. It is a little older, but the principles are still the same. This is the best way to get soft shadows affected by global illumination.

The other part, global illumination and reflection, is a harder problem to solve in real time. Baking your lights into a shadow map will add the global illumination into the shadow maps, but you still need to account for illumination from bounces. Consider this image from Unity:

It deals with both of the challenges we face. Objects close to a colored wall will receive color contribution from the global illumination, or bounced light. The sphere near the green wall takes on a green cast in its lighting calculation. And the metallic sphere in the center reflects the environment. One of the issues, the reflection, can be solved with reflection probes, which bake a local cubemap that is used for reflection. The problem is that you are baking 6 images every frame to account for objects moving, which can get expensive very quickly.

And for the environment lighting, you need to provide a precomputed DDS so that we can use the mip levels as a substitute for calculating the specular lobe based on roughness per pixel every frame. I’m currently talking about environment maps in another thread on the forum, so I won’t repeat it all here.

So, if you bake an environment map of the room your scene is set in, you can get both reflection and image based lighting (IBL) out of the one file, but the user cannot interact with things in the scene, i.e. move furniture around or replace it. However this will render the fastest in real time.

If you need to make use of interaction with the objects within your room, you will likely need to make use of two tricks. The first would be to bake out an environment map of your empty room for IBL and then use reflection probes to handle all of the reflections of dynamic objects.

The downside here is that baking shadows across objects is not going to work. You can bake self shadows into an object like a chair, but you won’t be able to bake the shadow into the floor asset. The thing you will need to do is create a plane under the object that catches the shadow and when you assemble the scene, the texture on the shadow catcher uses a multiply blend mode in the material so that it will look like it affects the floor asset.

To get the quality of render you showed as your example, you will never get completely away from baking some textures until everyday hardware catches up to top-of-the-line GPUs. Until then, you can break the problem down into manageable chunks and will have to use some smoke and mirrors and some trade-offs to get the assets to behave the way you want. I will say that this is a very typical workflow for game engines today, and there are a lot of resources available online for creating realistic game assets. If you frame your queries around game assets, you will find everything you need. Let me know if you have more questions.


@PatrickRyan Thanks a lot for the detailed solution. After reading your answer, I learnt many things, which made me realize that I knew nothing.

I will try implementing your guidelines and update you on my progress.

Thanks again for helping me.


Hi Patrick,

I know this is a more-than-year-old post, but I came across it recently while looking for help with baking textures from Blender to bring into Babylon JS, and it seemed to be one of the most helpful posts I’ve come across. I’m still struggling with the fundamentals of what’s required, though. I’ve done some tests with a Combined Map out of Blender and am getting reasonable results in Babylon, but still not quite right. Do I need individual passes (diffuse/albedo, ambient occlusion, shadows, etc.) for each UV? What about reflection: where does that come from? I found this very comprehensive walkthrough of the light mapping process…

But as a relative noob to both Blender and Babylon (and very much to the baking process), I didn’t fully follow all of what’s outlined there and found it perhaps too complex. Is that really the workflow required?

Any help much appreciated. Feel free to be as condescending as you like, assuming I know nothing!

Cheers


@BNARob, welcome to the community! I will do my best to break down some techniques, but feel free to reach out if anything is too deep or I don’t cover enough. If you are aiming for ultra-realism in real time you need to start with a few questions. I’m starting with these because I can’t see what your results are so am giving you some broad questions to hopefully lead you to the right solution.

  • Is my scene entirely static? By static, I mean that you are presenting the scene as a slice in time to the user. There is nothing that changes in your scene like time of day (which will change lighting if there is any influence from the sun), positions of any meshes (like the user moving objects around the scene, which will change shadows and bounce light), or user customization (allowing users to change the material properties of objects in your scene which will affect bounce light or reflections).

  • If my scene is not entirely static, what are the limits of any possible interactions? If you need to allow user interaction, can you limit the types of interaction or know what the extremes are? For example, if you are showing a digital walk-through of an apartment and want to show what it looks like at different times of day, can you set up a couple of pre-determined times rather than giving the user complete control? Like a version showing mid-day and one showing sunset. Any limits you can place on customization can help you dial in textures and methods to hit 60 fps and make the experience smooth.

  • What devices will my users view this content on? This is probably the most telling question for how you approach your scene. If you expect users to be on low-end devices, devices without broadband service, or browsers that do not support the latest WebGL2 features (https://get.webgl.org/webgl2/) you will need to make some trade-offs to keep performance high across all devices.

Once you know that, you can start to test out prototypes to see where you are falling on your target devices. Simple tests can be for mesh weight (number of triangles) and texture size (which affects download time) to see where you have performance that is undesirable.

I am only guessing that you are creating some sort of interior space with some windows to the exterior, but you should be able to extrapolate from here to your project. Your tests should tell you where you should fall in terms of texture size and then the question becomes the number of textures you need to download. There will need to be some experimentation to understand how much physical area you need to cover with your textures and when those textures do not have enough texels to represent your model at high fidelity.

Here are some things you can do to help:

  • Tile textures when you can to help with texel density versus texture size.
  • Utilize multiple UV sets to separate base color from light/shadow maps. This will allow you to use tiling textures while still being able to bake light maps which need to be baked as an atlas and cannot be tiled. Note that glTF supports at least two UV sets, but does not require support for more than two, so for compatibility, you should limit your models to two.
  • Split large models into smaller chunks to maximize texel density in your textures and utilize a maximum of your UV space.
  • Split models based on material type. If you have meshes that need transparency or other special needs like clear coat, sheen, anisotropy, etc., group those meshes into the same materials or mesh groups so that you can UV effectively for the extra textures or set similar blend modes in the materials.
  • Split your meshes or materials to minimize the number of textures to download. You can assume that using 1K textures is going to be more optimized than using 2K or 4K textures, but if smaller textures mean you need many times the number of textures, the returns diminish quickly.

So to your specific questions, I would always separate out the ambient occlusion and light/shadow maps from the diffuse/base color textures. They don’t need their own UVs unless you are using tiling textures for diffuse/base color, in which case the diffuse/base color will need one and the AO, Light, and Shadow maps would use another. This helps because if you don’t have a fully static scene (light changes, materials change, mesh positions change) you don’t have any lighting information baked into the base color. This helps when you move a light source which allows all of the maps to adjust correctly to the scene lighting (punctual lighting as well as IBL for PBR).

Reflection in your scene comes from a couple of places. One is your image-based lighting (IBL) and the other is either a reflection texture or a reflection probe. With a static model, you could bake all lighting and reflection into your diffuse/base color. You would likely need to increase your texture size somewhat and would not be able to use tiling textures, but you would be able to reduce the overall number of textures (one per material rather than multiples) and render each material as an unlit material, requiring no lights or lighting calculations in your scene. You will get a high-quality scene on a low-end device with this method, but you can’t change anything. Otherwise, with any change, you need to handle reflections separately from everything else.

IBL tends to be more generic, since it usually starts with a high-dynamic-range image (HDRI) that contains 32-bit information for simulating lighting in your scene. So if you have highly polished elements in your scene, like a chrome fixture, you may need to render out your own HDRI from your digital content creation (DCC) tool like Blender, Maya, Max, etc. This will make your lighting and reflection correct, but even then you are limited to a static scene. If you start changing your scene around (material colors, mesh positions, etc.) you will end up with reflections that no longer match.

This brings us to the feature that gets you closest to realistic rendering, but it is also very expensive and may not work on browsers that do not support WebGL 2: real-time reflection probes and real-time filtering. Basically, this allows you to render reflection cubemaps in real time and filter them in real time for roughness calculations in PBR reflections. Reflection Probes | Babylon.js Documentation

You can see that there are a lot of avenues you can take to do things like this, and I didn’t even step into custom shaders through node materials, where you could pass custom textures to optimize or unlock techniques for your scene. There is a lot to take in here, and I have only brushed the surface. If you want to dive deeper, please make a playground so we can see where you are having difficulty. I would also start a new thread just for your questions so everyone can find the conversation easily. In the meantime, here are a couple of links that can also help with lighting and baking lights:

Hope this helps you get started moving toward what you want to render.


Wow! Thanks so much for the very detailed response and links to further info. I have to confess: not all of it makes complete sense to me, but it is nonetheless very helpful.

In answer to your initial points/questions: my scene will be entirely static; basically a room with some objects/products inside, with very basic, non-directional lighting (just an overhead area light). I’ve managed to get fairly good-looking results by baking everything into the base colour, so I think I’m definitely moving in the right direction.

Re reflections… I’m sort of hoping to have a subtle mirror effect on the floor material. I found the reflection probes article/playground you linked to previously, and will maybe look at integrating it into the floor and testing performance. I’m working with a developer and am actually pretty code-ignorant personally, so when I say “I’ll” look at integrating it…

You mention starting a new thread so that others can find this info… Is there a way to port this directly to a new thread, or should I just open one and link back to this?

Cheers


@BNARob If you get to a point and want to share a playground, create a new thread and drop a link to this thread. I say this so that we can get a thread title that associates with your specific question or project. It’s much easier to find pertinent threads if the thread titles are descriptive to what the thread is covering. Feel free to ping us with questions as you are working through all of this.
