I’ve discovered that using a single lighting setup across materials with varying surface properties makes it challenging to maintain optimal exposure for all objects.
For example, under the same light settings, a dark shoe typically requires higher light intensity than a white sneaker. I'm currently using KHR_neutral tone mapping, which helps to some extent, but it doesn't fully solve the issue.
I’m looking for an efficient solution to dynamically adjust lighting intensity/exposure based on the loaded model’s overall surface characteristics without significant performance overhead. Specifically, is there a way to quickly analyze a loaded model’s PBR material properties (such as median diffuse values) and use this information as a multiplier for either the lighting or materials to implement ‘dynamic auto-exposure adjustment’?
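For illustration, the material-analysis idea could be sketched like this. Everything here is an assumption, not an existing API: the function names, the 0.18 mid-grey target, and the clamp are placeholders, and you'd gather the base colors from the loaded model's materials yourself.

```javascript
// Sketch: derive a one-shot exposure multiplier from the median albedo
// luminance of a loaded model's PBR materials. All names here are
// hypothetical (suggestExposureFromAlbedos, TARGET_LUMINANCE).

const TARGET_LUMINANCE = 0.18; // mid-grey target (an assumption)

// Rec. 709 luma of a linear RGB color with components in [0, 1]
function luminance([r, g, b]) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// albedos: array of [r, g, b] base colors gathered from the model's materials
function suggestExposureFromAlbedos(albedos) {
  const med = median(albedos.map(luminance));
  // Clamp so a near-black material can't push exposure toward infinity
  return TARGET_LUMINANCE / Math.max(med, 0.01);
}
```

With this, a dark shoe (median albedo luminance around 0.05) would get roughly a 3.6× exposure boost, while a white sneaker (around 0.8) would be dimmed to roughly 0.22×.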
Here’s a naive implementation of dynamic exposure.
it looks at a downsample of your scene, averages luminance, and adjusts exposure accordingly.
this is bad: it flickers (because it uses the average) and it's slow (a lot of iteration in the frag shader).
you could make this better by:
- doing a readPixels on the downsample buffer (at a lower refresh rate), doing the average on the CPU, and then smoothly interpolating between results
- not using the average at all, using the median instead
- using TAA-like temporal smoothing on the downsample buffer
it's my understanding you can do a median on the GPU by building a histogram texture and reading the percentile off its cumulative distribution? this would be the best quality and fastest result if you can get it working.
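The first improvement above could look something like this, purely on the CPU side and engine-agnostic. The function names are hypothetical, and the RGBA8 buffer layout is an assumption about what your engine's readPixels call returns:

```javascript
// Sketch of improvement #1: read the downsampled buffer back on the CPU
// at a low refresh rate, average it there, and ease the exposure toward
// the result so it never jumps between readbacks.

// Average luminance of an RGBA8 pixel buffer (values 0-255), in [0, 1]
function averageLuminance(rgba) {
  let sum = 0;
  const count = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    sum += 0.2126 * rgba[i] + 0.7152 * rgba[i + 1] + 0.0722 * rgba[i + 2];
  }
  return sum / (count * 255);
}

// Exponential smoothing: move `current` toward `target` without snapping.
// `speed` controls how quickly the gap closes per second.
function smoothExposure(current, target, dtSeconds, speed = 2) {
  const t = 1 - Math.exp(-speed * dtSeconds);
  return current + (target - current) * t;
}
```

The idea would be to poll readPixels every few hundred milliseconds, compute a target exposure from the average, and then call smoothExposure every frame so the transition is invisible.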
I apologize for the confusion. Actually, my need is only to analyze the model's overall diffuse color and compute a new exposure value once, when the model is loaded into the scene. I don't need to adjust exposure dynamically every frame during rendering; I just need to do this adjustment once per model load.
Given this case, what would be your suggestion for the least overhead solution? This exposure adjustment doesn’t need to be accurate; it just needs to be done quickly.
I just wanted to follow up with some results from my own testing, but I’ve run into a blocker.
Here’s the approach I’m taking:
Step 1: Create a dedicated sampleCamera that only captures a low-res (32×32) RTT of the current lighting.
Step 2: Pass the generated luminance RTT to the CPU, compute a histogram, find the median (or P70 in my PG), and then calculate a suggestedExposure.
Step 3: Adjust the image processing exposure based on the suggestedExposure to achieve the auto-exposure effect.
This process only happens once, right after the model is loaded; I'm trying not to update exposure every frame since it might be expensive on mobile devices. Please let me know if I'm headed in the right direction…
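Roughly, this is what I mean by Step 2, assuming an RGBA8 readback with the luminance replicated in the red channel. The names and the 0.18 mid-grey target are just placeholders:

```javascript
// Minimal CPU version of Step 2: build a 256-bin histogram from an RGBA8
// readback of the 32x32 luminance RTT, walk it to the requested percentile,
// and turn that value into a suggested exposure.

function histogramPercentile(rgba, percentile /* 0-1 */) {
  const bins = new Uint32Array(256);
  const count = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    bins[rgba[i]]++; // luminance assumed to be stored in the red channel
  }
  // Walk the cumulative distribution until we reach the target count
  const target = percentile * count;
  let cumulative = 0;
  for (let bin = 0; bin < 256; bin++) {
    cumulative += bins[bin];
    if (cumulative >= target) return bin / 255;
  }
  return 1;
}

function suggestedExposure(rgba, percentile = 0.7, targetLuminance = 0.18) {
  const p = histogramPercentile(rgba, percentile);
  // Clamp so a nearly black render can't blow the exposure up
  return targetLuminance / Math.max(p, 0.01);
}
```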
Right now I’m stuck on Step 2. I can generate the aeY_RTT correctly, but I can’t seem to calculate the histogram properly. Here is my PG:
This is just my first iteration — eventually I plan to move the histogram calculation onto the GPU (per earlier advice), but for now I’m already hitting a roadblock.
Btw, I've decided to measure the render result directly to find the suggested exposure instead of using the albedo texture or other textures from the 3D model. You are right, the histogram of the render should give us the best accuracy.
I’m still quite new to low-level graphics programming, so I’m not sure if I’m on the right track. Any help or guidance would be greatly appreciated!
I can generate the aeY_RTT correctly, but I can’t seem to calculate the histogram properly. Here is my PG:
But it looks fine tho? I don’t see the problem
I'm trying not to update exposure every frame since it might be expensive on mobile devices. Please let me know if I'm headed in the right direction
Really depends on your use case. If what you're showing is a somewhat small static scene where there aren't a lot of lighting changes, then this is fiiiiiiine
eventually I plan to move the histogram calculation onto the GPU (per earlier advice), but for now I’m already hitting a roadblock
If this is fast enough and good enough to cover all of your cases, I don't see why you'd go the extra mile to do these things.
You are right, the histogram of the render should give us the best accuracy.
if all you want is a percentile on the CPU, then you don't need to make a histogram; you can simply sort the array and take the percentile you want. (a histogram is faster, but sorting < 1024 values is virtually no overhead)
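the sort-based version is only a few lines (a sketch, with a hypothetical name):

```javascript
// No histogram: sort the luminance values and index into them.
// For a 32x32 readback that's 1024 values, which is negligible to sort.
function percentileBySort(values, percentile /* 0-1 */) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(
    sorted.length - 1,
    Math.floor(percentile * sorted.length)
  );
  return sorted[index];
}
```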
so I’m not sure if I’m on the right track
you tell me! xD
do you have cases where the exposure you want doesn’t match up with what’s suggested?