How to Create HDR Environment Files for Dynamic Scenes?

According to the docs, it’s possible to create scene.environmentTexture from a CubeTexture; however, it’s not HDR quality.

So how does one create a HDR quality file from a dynamic scene? (example: an apartment room with furniture and lighting created by the user).

And does Babylon support multiple HDR environments? (example: one HDR for living room, another for bathroom, etc.)

I could think of using a light probe placed at the visual center of the scene to create a CubeTexture, but how do I turn that into an .hdr or .dds quality file (ideally without external tools)?

Found this old forum post on creating dynamic reflectionTexture using ReflectionProbes, with a PG. For some reason it wasn’t completed.

PBRMaterial.roughness is ignored when using reflectionTexture, and reflectionTexture.adaptiveBlurKernel does not work the way it does on MirrorTexture. This makes ReflectionProbe useful only in very limited use cases.

So how does one blur reflectionTexture created by ReflectionProbe?

Is this still the current state @PatrickRyan?

I might experiment with this Camera approach (though I’m not sure yet how to handle perspective distortion and joining seams automatically); a rough sketch of the capture step follows the list:

  • place a camera in the middle of your World Scene.
    • rotate it 90 degrees between shots:
    • take screenshots:
      • 4 shots in the four horizontal directions ( X, -X, Z, -Z )
      • 1 shot up
      • 1 shot down.
  • repeat the process 3 times in your scene
    • 1 time underexposed ( going for shadows )
    • 1 time normally lit and
    • 1 time overexposed ( going for highlights )
  • process the 3 sets of 6 LDR images into 3 Cubemaps
  • pack them as one HDR Cubemap
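
For reference, a rough, untested sketch of the capture step (the helper name, the 512px face size, and the overall flow are my own assumptions; it uses BABYLON.Tools.CreateScreenshotAsync for each face, and stitching the faces into a cube map is left out):

const captureCubeFaces = async (scene, engine, position, size = 512) => {
    // Temporary camera with a 90 degree vertical FOV so each square shot covers one cube face
    const camera = new BABYLON.FreeCamera("cubeCapture", position.clone(), scene);
    camera.fov = Math.PI / 2;
    camera.minZ = 0.01;

    // Six view directions: four horizontal, one up, one down
    const directions = [
        new BABYLON.Vector3(1, 0, 0), new BABYLON.Vector3(-1, 0, 0),
        new BABYLON.Vector3(0, 0, 1), new BABYLON.Vector3(0, 0, -1),
        new BABYLON.Vector3(0, 1, 0), new BABYLON.Vector3(0, -1, 0),
    ];

    const shots = [];
    for (const dir of directions) {
        camera.setTarget(position.add(dir));
        // CreateScreenshotAsync resolves with a base64 data URL of the current view
        shots.push(await BABYLON.Tools.CreateScreenshotAsync(engine, camera, { width: size, height: size }));
    }
    camera.dispose();
    return shots; // one exposure bracket: 6 LDR face images
};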

@ecoin there is a new technique that @sebavan wrote a blog post about for real-time PBR filtering of light probes, which may get you where you want to go. But it is going to be a bit expensive, as it needs to be computed every frame, so it could be challenging if you target low-end devices. It is also overkill if you don’t have dynamic objects in your scene, in which case you would render once and keep that result.
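
As a minimal sketch of what wiring that up could look like (property names as in recent Babylon.js versions; Sebastien’s post is the authoritative reference):

// A probe rendered from the room's center, prefiltered on the fly by the PBR material
const probe = new BABYLON.ReflectionProbe("roomProbe", 256, scene);
probe.renderList.push(...scene.meshes);          // meshes the probe should see
probe.position = new BABYLON.Vector3(0, 1.5, 0); // visual center of the room

const pbr = new BABYLON.PBRMaterial("roomPBR", scene);
pbr.reflectionTexture = probe.cubeTexture;
pbr.realTimeFiltering = true;                    // filter the probe every frame (the expensive part)
pbr.realTimeFilteringQuality = BABYLON.Constants.TEXTURE_FILTERING_QUALITY_MEDIUM;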

If you want to create a true HDR IBL in scene, we can’t do that as we don’t render HDR values into render target textures. And we don’t have a way to distort the image to make the cube seamless, so you would have to do that manually. In this thread, I go over the process of taking a panoramic image and adding the distortion to it before using it as a cube map. But unfortunately, if you are trying to render a prefiltered IBL, you won’t be able to do it from the scene due to the lack of HDR data in the scene.


Hi Patrick, thank you for detailed explanations, again.

I did read through the links you posted before asking this question (and they helped me a lot; your detailed explanations with visual examples made it very easy for newbies like me to grasp a new concept - they should probably go into the official doc as is!).

My approach is indeed to use realtime filtering like what Sebavan wrote about, with subsequent updates on demand (when the Camera moves, something moves, etc.). In fact, I made the entire scene static and only render on demand, so I can afford to use the more expensive fancy Babylon stuff :slight_smile:.

The only problem I currently face is getting the raw HDR values for the 3 Cubemaps. Image manipulation is not much of a problem, because there are plenty of browser-based image tools in the frontend space.

Do you think this would work to get the correct float values to build HDR?

Underexposed - I’m thinking of dimming the light intensity and removing specular light entirely.
Overexposed - I’m not sure yet how this could be done in Babylon - maybe by adding a HemisphericLight?

If I did not understand HDR creation correctly, then maybe you could point me to articles/links that describe the process of creating HDR programmatically from given LDR PNG files?


@ecoin, the process of creating an HDRI through traditional photographic means is as you describe here in that we would take a bracket of images (say -2 to +2 EV, giving us 5 brackets: two overexposed and two underexposed around the normal exposure). Combining this data allows us to shift the exposure in the image and still retain shadow detail as well as highlight detail while giving the image a different look. How this translates for IBL is that we shift the exposures from -2 to +2 to pixel values of 0 to N. The pixels that are black in the most overexposed bracket (+2) will be at 0, and the pixels that are white in the most underexposed bracket (-2) will be at N. These images are then combined into a 32-bit floating-point image so that you have the entire range of data, not just the LDR range of 0-1.0 for each channel.

In terms of EV and values, each EV step represents a doubling of the amount of light that hits the sensor in a camera. This also translates to pixel values per range. Let’s say that I have a pixel in a 5 EV bracket that is white (255, 255, 255) in each image. The actual value of that pixel as an HDR float would be 32. The value of 1.0 doubles for each bracket, which can be easily illustrated in Photoshop, where I used an LDR value of (255, 255, 255) and added a 5 EV exposure increase. You can see in the info window that the original value was 1.0 and the exposure-shifted image is 32.
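
Restated as a one-liner (just the arithmetic from the paragraph above, nothing Babylon-specific):

// Each +1 EV doubles the linear value: LDR white (1.0) pushed up by 5 EV becomes 32
const exposeLinear = (ldrValue, ev) => ldrValue * Math.pow(2, ev);
exposeLinear(1.0, 5);  // 32
exposeLinear(1.0, -2); // 0.25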

So this quickly illustrates how you can get very bright pixel values in high-EV-range HDRIs that are shot outdoors. These can have RGB values in the hundreds of thousands for pixels representing the sun. So then how do we simulate that type of value from a Babylon scene? Honestly, it’s not super straightforward.

I can easily render an HDRI in a DCC package like Maya because I can set the values of the lights to be as bright as I want, bounce light around a scene, and have all of the floating-point values in the render, only needing to save it in a file format that retains the float values like .hdr. But in the case of a Babylon camera, we are not rendering any values above 1.0. You could try to trick it by lowering the lights by half and calling that render an underexposed EV, then calculating all pixels based on their respective values in each image, treating each pixel in the underexposed render as representing twice its recorded brightness - but there is likely an easier way to do this.
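
A rough sketch of that trick (the helper name and the 512px size are made up; it assumes every light in the scene exposes an intensity property and that scaling intensity is an acceptable stand-in for a real exposure change):

const captureBracket = async (scene, engine, camera, ev) => {
    const originals = scene.lights.map((light) => light.intensity);
    // e.g. ev = -1 halves all lights, ev = +1 doubles them
    scene.lights.forEach((light) => (light.intensity *= Math.pow(2, ev)));
    scene.render();
    const shot = await BABYLON.Tools.CreateScreenshotAsync(engine, camera, { width: 512, height: 512 });
    scene.lights.forEach((light, i) => (light.intensity = originals[i])); // restore
    return shot;
};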

Applying a tone mapping curve to the image that changes the range from 0-1 to a range of 0-N may be the most straightforward way to do it. It could be linear, but you could also use an eased curve to control the tone mapping in case you want to emphasize different ranges of tones. Then you would likely need to convert to the env format to use it in the scene. The env is an RGBD LDR format where the alpha channel is a mask acting as a divisor on our maximum HDR value to shift the RGB pixel into HDR space.
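
As a sketch only (the real writer lives in EnvironmentTextureTools; the curve exponent and range below are illustrative values, not Babylon internals):

// 1. Lift an LDR channel (0..1) into 0..N with a simple eased (power) curve
const liftToHdr = (ldr, maxRange = 8, exponent = 2.2) => Math.pow(ldr, exponent) * maxRange;

// 2. RGBD encode: store a divisor d in alpha so that rgb * d fits in 0..1,
//    then decode later as hdr = rgb / d (alpha acting as the divisor described above)
const encodeRGBD = ([r, g, b]) => {
    const maxComponent = Math.max(r, g, b, 1e-6);
    const d = Math.min(1, 1 / maxComponent);
    return [r * d, g * d, b * d, d];
};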

Going with the env format would allow you to plug this straight in as an IBL in the scene, but you would need to do something to encode the env like @sebavan is doing with the IBL Texture Tool, which has its source available on GitHub. But I would lean on him to better explain the path and challenges of converting a rendered image to the env format.

I hope this helps somewhat.


Thank you so much, Patrick! I finally got a complete picture of how HDR works.

I was struggling to understand how it works for this use case from reading on the web. Most articles were either too technical (getting into complex math formulas) or too academic (pure theory with no practical info).

You definitely have a knack for teaching people :rocket:!

I will get the Camera to produce Cubemaps first. One step at a time.


Hi @PatrickRyan!

After many architectural design drafts, calculations and tests, I found that a procedurally generated HDRI can achieve a very high level of realism with PBRMaterials alone. It produces very accurate lighting without the use of expensive ReflectionProbes. Combine it with static scenes/assets and good lightmap/shadow baking, and you get an almost AAA-quality scene.

This, I believe, can be very useful to many Babylon users, including games. So I’d like to get your opinion on the best strategy to make this feature optimized for performance. I’ve come up with two, and both have pros and cons.

I. Standard Procedural HDR

  1. Camera Panoramic Mode takes a snapshot of the scene as a 360-degree equirectangular projection
  2. Apply a tone mapping algorithm to the snapshot to expand the exposure range
  3. Save as a file.hdr object URL in memory
  4. Load as reflectionTexture: pbr.reflectionTexture = new HDRCubeTexture('file.hdr', scene, 512, false, true, false, true)

Pros:

  • The HDR generated procedurally can be saved and imported into other DCC tools.

Cons:

  • HDR files are too large for loading on the web (averaging 50-100 MB for a typical apartment scene)
  • Inefficient - unnecessary round-trip computation, because internally Babylon requires a prefiltered RGBD cube texture.

That leads me to the second, more sound strategy.

II. Babylon Procedural HDR

  1. A ReflectionProbe/Camera takes a snapshot of the scene in 360 degrees to create a CubeTexture
  2. Apply a tone mapping algorithm to the CubeTexture to expand the exposure range
  3. Save the texture in Babylon’s RGBD hdr.env format in memory
  4. Load as reflectionTexture: pbr.reflectionTexture = CubeTexture.CreateFromPrefilteredData('hdr.env', scene);

Pros:

  • Scalable - no matter the project size, users only have to download baked lightmaps to load the scene
  • Performant - no round-trip conversion to an equirectangular texture, then to a CubeTexture, then to RGBD.

Cons:

  • Cannot be used in DCC tools (but Blender can easily generate HDR anyway).

I think the second strategy is the clear winner. However, I was not able to find much info on the exact data structure used by Babylon’s prefiltered .env file at the buffer level, besides this high-level summary.

Specifically, could you please give a brief example of how to construct such a hdr.env file from scratch? A low-level, pixel-by-pixel explanation? For example, what should the headers contain, is the binary data stored as [r, g, b, d...], and what values should r, g, b and d be for the brightest and darkest pixels in the scene?

You could modify the first option with a step 3.5, which loads the file.hdr into the env creator page, Babylon.js Texture Tools, which Patrick already posted. This would greatly reduce the file size. Then replace step 4 with option 2’s step 4.

You might still prefer the 2nd option, but this addition gets rid of option 1’s cons. Then you could just focus on which camera gives the best results.


At this point I will ping @sebavan to speak about the process of creating the env file, as he is the one who implemented it in the engine, and whatever I say would need to be checked with him anyway. But yes, I believe that either version, with @JCPalmer’s addition to the process, would be worth prototyping.

From an initial glance, I would expect them to work, but you may find some pain points. One of them would be the generation of the equirectangular RTT. This isn’t a feature in the engine yet, but we have it on our wish list. There was a thread about this elsewhere from @bnolan which may help.


I thought of this before, but it has major network bottlenecks. Even if it’s built into the app running browser-only, there are still too many conversion steps, which are all CPU intensive.

A typical optimized 1K .hdr file is 1.5-2 MB in size, and a typical scene has 10-30 procedural .hdr files, totaling 1.5*30 ~= 50 MB:

  • What if users have slow CPU?
  • What if users have slow internet?
  • What if users have expensive internet? (I myself used to live on 2GB of data per month as a student)
  • A site with 1M visits/month, averaging 5 page views/visit, will consume 50 MB * 1M * 5 = 250 TB/month

=> It becomes very slow/expensive just to view/host the site.

This is great, thank you for pointing it out.

Yes, but with option 2, this is no longer a problem. You can convert equirectangular to a cube texture with what @bnolan has created.

The major advantage of option 2 is that it bypasses network upload/download/conversions entirely, which should be much faster.


While we wait for @sebavan’s answer on low-level RGBD .env encoding, can someone point me to a good source explaining how to encode a PNG image to .hdr in the browser?

I’m struggling to understand this low-level buffer encoding, because there are plenty of .hdr → .png conversion examples on the web, but nothing for the reverse conversion in JavaScript.

It’s made more complicated because there are more than 3 encoding types, and even for the same encoding, the formula differs between sources. So I’m confused.

Basically, an equirectangular screenshot PNG Blob needs to be encoded as a Radiance 32-bit_rle_rgbe ArrayBuffer to feed into BABYLON.HDRTools.GetCubeMapTextureData().
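
For reference, here is my current (possibly wrong) reading of the per-pixel part of that Radiance format - it ignores the RLE compression and the text header entirely, and is not Babylon code:

// float RGB -> 4 bytes [r, g, b, e]: mantissas scaled by 256, shared exponent biased by 128
const floatToRGBE = (r, g, b) => {
    const max = Math.max(r, g, b);
    if (max < 1e-32) return [0, 0, 0, 0];
    let e = Math.ceil(Math.log2(max));
    if (max * Math.pow(2, -e) >= 1) e += 1;      // keep the mantissa below 1 (frexp convention)
    const scale = 256 * Math.pow(2, -e);
    return [Math.floor(r * scale), Math.floor(g * scale), Math.floor(b * scale), e + 128];
};
floatToRGBE(1, 1, 1); // [128, 128, 128, 129]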

Obviously, with Seb’s help, we can skip this PNG → HDR encoding entirely and go from CubeTexture PNG screenshots → RGBD .env directly.

But for now, it would be good to have the first prototype because I already have everything else working. And it’d be a good option to enable Babylon users to export HDR files from any scene.


From a generated Probe CubeTexture, you can use our HDRFiltering class to prefilter and generate the correct mip map chain:

// `texture` is the probe's cube texture to prefilter; `onLoad` fires once the mip chain is ready
const hdrFiltering = new HDRFiltering(engine);
hdrFiltering.prefilter(texture, onLoad);

Then, from a prefiltered texture, you can generate the .env with the following tools:

EnvironmentTextureTools.CreateEnvTextureAsync(texture)
    .then((buffer: ArrayBuffer) => {
        // Serialize the prefiltered texture into the RGBD .env container and download it
        var blob = new Blob([buffer], { type: "octet/stream" });
        Tools.Download(blob, "environment.env");
    })
    .catch((error: any) => {
        console.error(error);
        alert(error);
    });
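
Putting the pieces together with a probe, a rough end-to-end sketch could look like this (untested here; the in-memory object URL reload and the forced ".env" extension are assumptions worth verifying):

const probeTexture = probe.cubeTexture;                // from an already-rendered ReflectionProbe
const hdrFiltering = new BABYLON.HDRFiltering(engine);

hdrFiltering.prefilter(probeTexture, () => {
    BABYLON.EnvironmentTextureTools.CreateEnvTextureAsync(probeTexture)
        .then((buffer) => {
            // Keep the .env in memory instead of downloading it, then reload it as a prefiltered cube
            const blob = new Blob([buffer], { type: "octet/stream" });
            const url = URL.createObjectURL(blob);
            scene.environmentTexture = BABYLON.CubeTexture.CreateFromPrefilteredData(url, scene, ".env");
        })
        .catch((error) => console.error(error));
});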

Hello @ecoin, just checking in - do you have any further questions? :slight_smile:

Hi Carol! Not for now - I haven’t had time to work on this feature yet.