Yes, I’m sure there are better, more robust ways to do light source detection in images. It’s interesting that you mentioned shadow detection, as shadows could be used to trace the light back toward its source, even if the source is off-camera. And your earlier post about finding the real-world Sun angle from geographic coordinates and time of day is intriguing. This could be used in a number of different applications, real-time path tracing of outdoor scenes being just one of them!
My previous two posts about this light detection only work if the Sun is visible (or partially covered by a thin cloud layer), and only if the image was taken outdoors, because the Sun dominates all human-made light sources and is therefore easier to separate and identify algorithmically.
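Just to illustrate the kind of "Sun dominates everything" separation I mean, here's a minimal NumPy sketch (not my actual posted algorithm; the function name, the 90% threshold, and the y-up axis convention are all just for illustration):

```python
import numpy as np

def find_sun_direction(hdr, threshold_ratio=0.9):
    """Locate the dominant light (e.g. the Sun) in an equirectangular HDR image.

    Returns a unit direction vector toward the luminance-weighted centroid of
    the brightest pixels. Assumes the Sun far outshines everything else.
    """
    h, w = hdr.shape[:2]
    # Per-pixel luminance (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Keep only pixels near the maximum brightness
    ys, xs = np.nonzero(lum >= threshold_ratio * lum.max())
    # Luminance-weighted centroid of the bright cluster (in pixel coords)
    weights = lum[ys, xs]
    cx = np.average(xs, weights=weights)
    cy = np.average(ys, weights=weights)
    # Pixel coords -> spherical angles (equirectangular mapping)
    phi = (cx / w) * 2.0 * np.pi   # azimuth, 0..2*pi
    theta = (cy / h) * np.pi       # polar angle, 0..pi
    # Spherical -> Cartesian unit vector (y-up convention)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])
```

Because everything is thresholded relative to the image's own maximum, this works regardless of the absolute HDR exposure, but it would fail on the indoor/off-camera-light cases discussed below.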
Although my simple approach does the trick for now, we will need a more sophisticated approach for detecting light sources in indoor HDR images, where there could be multiple arbitrarily shaped lights, or, in some images I’ve encountered, no visible light source at all, just ambient room light coming from a window off-camera!
To help me get started figuring out some of the math in my posted algos, I followed the PBR book light-sampling link that was suggested by a three.js renderer user and forum participant. In the PBR book (the 3rd edition, which is now free online and is pretty much the bible of CG), they explain the pixel (x, y) coordinates to spherical angles conversion, and then the spherical to Cartesian conversion, that I used in my first post about HDR light detection. I wouldn’t have figured that math out on my own! Ha.

But the reason I mentioned this book is that later in the same chapter, it gives a technique to loop through all the pixels in any HDR image (indoor or outdoor, lights visible or lights off-camera) and build a lighting probability density distribution as it goes from pixel to pixel. By the end of the loop, you have a sort of importance ‘light’ map that you can directly importance-sample from when path tracing. Because they use this more sophisticated approach (with math and probability algos that are still beyond my understanding), the end result is still considered unbiased rendering, which is really cool. In other words, if you actually placed a real-world scene in that HDR spherical environment, we could expect the rendered outcome to match reality to the best of our human ability.
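For anyone curious, here's roughly how I understand that pixel-by-pixel distribution idea, sketched in NumPy. This is a simplified illustration of the concept, not the book's actual code: it returns discrete per-pixel probabilities and skips the pixel-area-to-solid-angle PDF conversion the book performs for truly unbiased estimates.

```python
import numpy as np

def build_env_distribution(hdr):
    """Build a discrete PDF/CDF over an equirectangular HDR image,
    so bright pixels can be importance-sampled as light directions."""
    h, w = hdr.shape[:2]
    # Per-pixel luminance (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Weight each row by sin(theta) to compensate for the equirectangular
    # stretching near the poles (pole pixels cover less solid angle)
    theta = (np.arange(h) + 0.5) / h * np.pi
    weights = lum * np.sin(theta)[:, None]
    pdf = weights / weights.sum()          # discrete pixel probabilities
    cdf = np.cumsum(pdf.ravel())           # flattened cumulative distribution
    return pdf, cdf

def sample_env(pdf, cdf, shape, u):
    """Map a uniform random number u in [0,1) to a direction whose
    selection probability is proportional to pixel brightness.
    Returns (unit direction, discrete probability of the chosen pixel)."""
    h, w = shape
    idx = np.searchsorted(cdf, u)          # invert the CDF
    y, x = divmod(idx, w)
    # Pixel center -> spherical angles -> Cartesian (y-up convention)
    phi = (x + 0.5) / w * 2.0 * np.pi
    theta = (y + 0.5) / h * np.pi
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])
    return direction, pdf[y, x]
```

The nice property is exactly what the book promises: a tiny but extremely bright pixel (the Sun) ends up owning almost the entire CDF, so nearly every sample is sent toward it, while dim indoor images spread samples across all their light patches automatically.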
I would like to incorporate this ability/technique as well as your earlier ideas, but I will have to continue studying these approaches until I can visually ‘see’ the overall picture and explain in non-math speak what is going on under the hood (like I hopefully did in my previous two posts).
Thanks for sharing, and for the inspiring info!