Path-tracing in BabylonJS

Hello @Necips

Yes, I’m sure there are better, more robust ways to do light source detection in images. It’s interesting that you mentioned shadow detection, as shadows could be used to trace the light back toward the source, even if the source is off camera in the image. And your earlier post about finding the real-world Sun angle from geographic coordinates and time of day is intriguing. This could be used in a number of different applications, path tracing outdoor scenes in real time being just one of them!
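
Just to make that idea concrete, here’s a rough back-of-the-envelope sketch of such a Sun-angle calculation (my own illustration, not code from your post). It uses the standard declination/hour-angle approximation and ignores the equation of time, longitude/timezone correction, and atmospheric refraction:

```javascript
// Approximate Sun elevation/azimuth from latitude, day of year, and local
// solar time. Purely illustrative - accuracy is within a degree or two.
function sunAngles(latitudeDeg, dayOfYear, solarHour) {
  const rad = Math.PI / 180;
  // Solar declination (degrees), simple cosine approximation
  const declDeg = -23.44 * Math.cos(rad * (360 / 365) * (dayOfYear + 10));
  // Hour angle: the Sun moves 15 degrees per hour, zero at solar noon
  const hourAngleDeg = 15 * (solarHour - 12);
  const lat = latitudeDeg * rad, decl = declDeg * rad, h = hourAngleDeg * rad;
  // Elevation above the horizon
  const elevation = Math.asin(
    Math.sin(lat) * Math.sin(decl) + Math.cos(lat) * Math.cos(decl) * Math.cos(h)
  );
  // Azimuth measured clockwise from north (clamp guards against round-off)
  const cosAz = (Math.sin(decl) - Math.sin(elevation) * Math.sin(lat)) /
                (Math.cos(elevation) * Math.cos(lat));
  let azimuth = Math.acos(Math.min(1, Math.max(-1, cosAz)));
  if (hourAngleDeg > 0) azimuth = 2 * Math.PI - azimuth; // afternoon: Sun in the west
  return { elevationDeg: elevation / rad, azimuthDeg: azimuth / rad };
}
```

Feed those two angles into a directional light (or a path tracer’s Sun direction) and you get a physically plausible outdoor setup for any place and time of day.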

My previous 2 posts about doing this light detection only work if the Sun is visible (or partially covered by a thin cloud layer), and only if the image was taken outdoors, because the Sun dominates any human-made light sources and is therefore easier to separate and identify algorithmically.

Although my simple approach (sketched below) does the trick for now, we will need a more sophisticated approach for detecting light sources in indoor HDR images, where there could be multiple arbitrarily shaped lights or, in some images I’ve encountered, no visible light source at all, just ambient room light coming from a window off camera!
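
For reference, here is a minimal sketch of what such a simple approach can look like (my own illustrative names, assuming an equirectangular HDR stored as a flat RGB Float32Array). It just scans for the single brightest pixel and calls that the Sun, which is exactly why it only works outdoors:

```javascript
// Find the brightest pixel in an HDR image - a stand-in for the Sun.
// Only sensible when one light source dominates the whole image.
function findBrightestPixel(pixels, width, height) {
  let best = -1, bestX = 0, bestY = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 3;
      const lum = pixels[i] + pixels[i + 1] + pixels[i + 2]; // rough brightness
      if (lum > best) { best = lum; bestX = x; bestY = y; }
    }
  }
  return { x: bestX, y: bestY }; // pixel coords, later mapped to a direction
}
```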

To help me get started figuring out some of the math in my posted algos, I followed the PBR Book light sampling link that was suggested by a three.js renderer user and forum participant. In the PBR Book (the 3rd edition, which is now free online and is pretty much the bible for CG graphics), they explain the x,y pixel coordinates to spherical angles conversion, and then the spherical to Cartesian coordinates conversion, that I used in my first post about HDR light detection. I wouldn’t have figured that math out on my own! Ha.

But the reason I mentioned this book is that later in the same chapter, it gives a technique to loop through all the pixels in any HDR image (indoor or outdoor, lights visible or lights off camera) and build a lighting probability density distribution as it goes from pixel to pixel. When you get to the end of the loop, you have a sort of importance ‘light’ map that you can directly importance-sample from when path tracing. Because they importance-sample with the proper probability weights (math and probability algos that are still beyond my understanding), the end result is still considered unbiased rendering, which is really cool. In other words, if you actually placed a real-world scene in that HDR spherical environment, we can expect the rendered outcome to match reality to the best of our human ability.
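
To sketch what I understand of that technique so far (illustrative JavaScript with my own function names, not the PBR Book’s actual C++ code): every pixel’s luminance, weighted by sin(theta) to undo the equirectangular stretching near the poles, is accumulated into a running CDF; sampling then becomes a binary search, and the chosen pixel is mapped back to a world direction with the same spherical-to-Cartesian math mentioned above:

```javascript
// Build a cumulative distribution over all pixels, weighted by luminance.
function buildLightCDF(pixels, width, height) {
  const cdf = new Float32Array(width * height);
  let total = 0;
  for (let y = 0; y < height; y++) {
    // sin(theta) corrects for rows near the poles covering less solid angle
    const sinTheta = Math.sin(Math.PI * (y + 0.5) / height);
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 3;
      const lum = 0.2126 * pixels[i] + 0.7152 * pixels[i + 1] + 0.0722 * pixels[i + 2];
      total += lum * sinTheta;
      cdf[y * width + x] = total;
    }
  }
  return { cdf, total };
}

// Pick a pixel with probability proportional to its brightness,
// then convert it to a unit direction vector on the sphere.
function sampleLightDirection(cdf, total, width, height, rand) {
  const target = rand * total; // rand is uniform in [0, 1)
  let lo = 0, hi = cdf.length - 1;
  while (lo < hi) { // binary search for the first cdf entry >= target
    const mid = (lo + hi) >> 1;
    if (cdf[mid] < target) lo = mid + 1; else hi = mid;
  }
  const x = lo % width, y = Math.floor(lo / width);
  const phi = 2 * Math.PI * (x + 0.5) / width; // azimuth
  const theta = Math.PI * (y + 0.5) / height;  // polar angle from the top
  return {
    x: Math.sin(theta) * Math.cos(phi),
    y: Math.cos(theta),
    z: Math.sin(theta) * Math.sin(phi)
  };
}
```

The real algorithm also keeps each sample’s PDF value so the estimator can be weighted correctly; that weighting is what keeps the result unbiased.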

I would like to incorporate this ability/technique as well as your earlier ideas, but I will have to continue studying these approaches until I can visually ‘see’ the overall picture and explain in non-math speak what is going on under the hood (like I hopefully did in my previous 2 posts :smiley: ).

Thanks for sharing, and for the inspiring info!

Hi All,

Just checking in with a quick update. I recently figured out how to create and use Babylon’s notion of an empty 3D transform. The equivalent in three.js is THREE.Object3D() - it doesn’t have any geometry or materials associated with it; it’s just an empty transform, kind of like a gizmo object in a 3D editor. Browsing the Babylon source code on GitHub, from what I can gather, the equivalent in Babylon is BABYLON.TransformNode(). I was able to assign an individual TransformNode to each sphere in our test scene, and on the JavaScript setup side that allowed me to perform simple operations on the transform using familiar Babylon commands, e.g. LeftSphereTransformNode.position.set(x,y,z), LeftSphereTransformNode.scaling.set(x,y,z), and LeftSphereTransformNode.rotation.set(x,y,z).
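
Here’s a minimal sketch of what that setup side looks like (assuming an existing scene; the variable names are just my placeholders):

```javascript
// An empty transform - no geometry or material, just position/rotation/scaling.
const leftSphereTransformNode = new BABYLON.TransformNode("leftSphere", scene);
leftSphereTransformNode.position.set(-2, 1, 0);          // translate
leftSphereTransformNode.scaling.set(1, 2, 1);            // non-uniform scale -> ellipsoid
leftSphereTransformNode.rotation.set(0, Math.PI / 4, 0); // Euler angles, in radians

// Each frame, hand the inverse world matrix to the path tracing shader,
// so rays can be transformed into the unit sphere's object space.
const invSphereMatrix = BABYLON.Matrix.Invert(leftSphereTransformNode.getWorldMatrix());
```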

This lets the user-side code be much more flexible, rather than having to hardcode those object parameters in each path tracing shader. The flip side is that in the ever-growing library of path tracing shader includes, I had to create a special sphere-ray intersection routine called UnitSphereIntersect(ray), which doesn’t take any scaling (sphere radius), rotation, or translation (sphere position) into account, but instead intersects a ray with an untransformed unit sphere (radius of 1) centered at the origin (0,0,0). One of ray tracing’s greatest abilities is that you can keep such a simplified ray-sphere function and instead transform the ray by the inverse of the desired sphere object’s transformation. It resembles how we use the inverse camera matrix on 3D scene objects to correctly display the transformed objects out in the scene.
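
To show the idea (plain JavaScript for illustration - the real routine lives in the GLSL shader includes, and these names are mine): a ray o + t*d hits the unit sphere where (o + t*d)·(o + t*d) = 1, which expands to the quadratic t²(d·d) + 2t(o·d) + (o·o − 1) = 0:

```javascript
// Intersect a ray with the unit sphere at the origin. Returns the nearest
// positive t along the ray, or Infinity on a miss.
function unitSphereIntersect(o, d) {
  const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
  const a = dot(d, d);
  const b = 2 * dot(o, d);
  const c = dot(o, o) - 1;          // unit radius, so radius^2 = 1
  const disc = b * b - 4 * a * c;
  if (disc < 0) return Infinity;    // ray misses the sphere entirely
  const s = Math.sqrt(disc);
  const t0 = (-b - s) / (2 * a);    // near hit
  const t1 = (-b + s) / (2 * a);    // far hit
  if (t0 > 0) return t0;
  if (t1 > 0) return t1;            // ray origin is inside the sphere
  return Infinity;                  // both hits are behind the ray
}

// To render a transformed sphere, move the ray into the sphere's object space:
// the origin transforms as a point, the direction as a vector (and is left
// un-normalized, so the returned t remains valid back in world space).
function intersectTransformedSphere(rayOrigin, rayDirection, invWorldMatrix) {
  const o = BABYLON.Vector3.TransformCoordinates(rayOrigin, invWorldMatrix);
  const d = BABYLON.Vector3.TransformNormal(rayDirection, invWorldMatrix);
  return unitSphereIntersect(o, d);
}
```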

That’s why I have been working on getting this TransformNode business up and running for the last few days. Hopefully very soon I’ll have a 2nd (similar room) demo to show the new, easy transforming abilities on the end-user’s JS side. I’ll also add the most commonly encountered general quadrics and simple shapes - unit sphere, cylinder, cone, paraboloid, box, disk, and rectangle. I might or might not include the hyperboloid (hourglass) and the hyperbolic paraboloid (saddle), as we don’t really come across those very often and they tend to be a little more finicky mathematically when it comes to analytically intersecting with rays. The torus (doughnut or ring) requires a different approach altogether when it comes to finding ray intersections, because its intersection equation is quartic (4 solutions max), as opposed to the quadratic equations (2 solutions max) of the easier quadric shapes listed above, which can be solved either geometrically or by the famous old quadratic formula. But eventually I’ll add that too, because ring shapes come up more often than hyperbolic paraboloids, ha. :smiley:

Will return soon with some new capabilities for our rendering system!
