Path-tracing in BabylonJS

It will be my pleasure to do it. Are you sure it won’t be an issue for you @erichlof to use Typescript though? Even though I think it makes the code better, I don’t want it to slow you down.

This means we would need to use a compiler then. I am used to having Parcel in my projects, but we could use something else if you prefer.

The cool thing with Parcel is that you can use this package, which allows you to load GLSL files: parcel-plugin-glsl - npm
This way you can keep your shaders in separate GLSL files, which makes them easier to update. I use that for another project and it works perfectly.
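For example, a minimal sketch of what the loading side could look like (the file path here is just a placeholder, and if I remember correctly the plugin resolves the import to the shader source as a string):

// pathTracing.glsl is a placeholder file name, not one that exists in the repo yet
import pathTracingFragmentSource from './shaders/pathTracing.glsl';

// hand the loaded source to Babylon's shader store so an Effect can find it by name
BABYLON.Effect.ShadersStore['pathTracingFragmentShader'] = pathTracingFragmentSource;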

But again this is your call @erichlof, tell us what would be the best for you :wink:

Here’s a draft PR that implements offscreen rendering. I’ve temporarily disabled controls in that mode until I can hack in some input handling, for reasons mentioned upthread. When running, add ?renderOffscreen=1 to the URL to do what it says on the box.

Do raytrace in offscreen worker by jelster · Pull Request #2 · erichlof/Babylon.js-PathTracing-Renderer (github.com)

HTH


Thank you for offering! Mmm… I’m not sure how we should proceed with this modules issue, as yes, it would possibly slow me down when trying to add features like 3D transforms, glTF model loading, and the associated BVH creation. All that latter stuff is pretty complex, even in JavaScript! Ha

Maybe we should wait until most of the features are in place and working correctly, and then you and others could start chopping it up and using typescript to separate concerns into individual modules. Does that sound like a good plan going forward?

I don’t have a strong opinion about this either way, but I just can’t contribute much to that effort once it has begun. I am much more comfortable with the path tracing shader engine side of things. :slight_smile:

Let me know how you want to proceed. Thanks!

Cool! Thanks for the draft - although I might wait to merge this until you get the controls working as you mentioned.

Also, about the first PR: does the actual ShadersStore need to have the IncludeShadersStore name on all the pieces of shader code, and not just the file renamed? Pinging @Evgeni_Popov about this one. :slight_smile:

I might want to try my own hand at splitting up this shader code, because there are certain sections that I need to be separate and certain sections that should be included together, like some of the routines with the word ‘random’ in their function name. I have an idea of how it’ll look, but I just need to spend some time with it. Sorry, I don’t want to step on anyone’s toes during this process. The only part of this project that I have stronger opinions about is the path tracing shader engine code and how it should be organized and accessed. This is just because I have worked with these functions for years, and I have a pattern for them that works well and will hopefully be intuitive to others in the end.

Edit: after looking further at @Evgeni_Popov 's earlier ShadersStore examples, I think I like the one that uses ShadersStore and IncludeShadersStore, as opposed to the one that doesn’t use them. I need to be thoughtful when chopping all these shader functions into separate pieces and when naming their include access names. If it’s ok with everyone, I will tackle splitting all the shader code up so that the end user can just drop the includes wherever they are needed.

For example, our current test demo scene, with the red and blue Cornell box, white coat sphere on the left, and glass sphere on the right, could be reduced to:
include commonDefines // all scenes must include these
include randomFunctions // all scenes must have these as well, everything from rng() to randomCosWeightedDirectionInHemisphere()

include calcFresnelReflectance // all scenes must have this for transparent objects
include solveQuadratic // all scenes must have this for any kind of mathematical intersection, such as spheres, cylinders, etc…
include sphereIntersect // most scenes will have this too, but maybe not if the scene strictly uses boxes and/or triangles
Just by using those includes, we will save hundreds of lines in each scene.
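As a rough sketch (using Babylon’s #include<> syntax; the chunk names above are just my working names for now), the top of that demo’s fragment shader would then boil down to:

// Cornell box with spheres demo - only the shared chunks it actually needs (sketch)
#include<commonDefines>
#include<randomFunctions>
#include<calcFresnelReflectance>
#include<solveQuadratic>
#include<sphereIntersect>

// ...followed only by this scene's own SetupScene() and the main bounce loop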

I can envision taking this even a step further than I did with the three.js version, and creating separate function pieces for different types of materials. Without getting too much into theory, each surface type has what’s called a BRDF (bidirectional reflectance distribution function), and we must sample that distribution at every new surface intersection (bounce) as the ray travels through the scene. So maybe we could have:
include diffuseSurfaceHandler
include metalSurfaceHandler
include transparentSurfaceHandler
include clearCoatSurfaceHandler
include translucentSurfaceHandler
and then you just stick in the ones that you need for your particular scene’s materials. This could shorten each demo’s shader by another 100 lines of code.
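To make that concrete, here is a purely hypothetical sketch of how the bounce loop might call into those handler chunks (the function names, signatures, and material constants are made up for illustration; nothing is final):

// inside the bounce loop, after the nearest intersection has been found (sketch only)
if (hitType == DIFFUSE)
    rayDirection = handleDiffuseSurface(hitNormal);              // from diffuseSurfaceHandler
else if (hitType == METAL)
    rayDirection = handleMetalSurface(rayDirection, hitNormal);  // from metalSurfaceHandler
else if (hitType == TRANSPARENT)
    rayDirection = handleTransparentSurface(rayDirection, hitNormal, hitColor); // from transparentSurfaceHandler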

Please let me know your thoughts on these PRs. Thanks again!

Agreed, this is why I wanted to check whether you would be comfortable with a module/TypeScript approach before doing anything.
So we should wait until the hard part is done on your side. As you mention, there are not that many Babylon.js elements yet, as this is mostly shaders for now. It will still be easy to switch to modules once the job is done.

I have one question though: isn’t it way easier for you to edit shaders in GLSL files rather than in JS files? I thought it could be very helpful. Or maybe you don’t have much shader work left, since it is already working in the three.js project?


I guess I’m not following what you’re asking here – when the <script> tag with ShadersStoreInclude.js is loaded, the statements in it are executed. The three statements in there define the screenCopyFragmentShader, screenOutputFragmentShader, and pathTracingFragmentShader to be part of the global BABYLON.Effect.ShadersStore – what I think you’re talking about is breaking it down a step further, like in the pathTracingFragmentShader:


// common defines, will eventually be placed in BABYLON.Effect.ShadersStore[]
#define PI               3.14159265358979323
#define TWO_PI           6.28318530717958648

The goal being to make separate includes files for each of those types of reusable pieces of functionality?
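Something like this, I imagine (just a sketch - I believe the actual Babylon property is BABYLON.Effect.IncludesShadersStore, with an ‘s’, and ‘commonDefines’ is only a placeholder chunk name):

// register a reusable chunk once, e.g. in ShadersStoreInclude.js
BABYLON.Effect.IncludesShadersStore['commonDefines'] = `
#define PI               3.14159265358979323
#define TWO_PI           6.28318530717958648
`;

// then any shader registered in BABYLON.Effect.ShadersStore can pull it in with:
//   #include<commonDefines>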
@PichouPichou - Regarding TS and modules – having played with the code and poked around with some refactoring approaches, I think there’s a bunch of foundational refactoring that will need to happen before that process can begin, and getting the code to be ES6-friendly without involving webpack or other types of transpilation is where I would want to start. It’s a tough call, because continuing to make progress sort of precludes that type of work at the moment, but that kind of work is necessary in order to effectively accept and manage other contributions!

Edit: that said, if we can get this code into a place where it’s runnable in the PG, I think it would really open the doors for community contribution.


Thank you @PichouPichou , if we’re all in agreement at this point, I think your plan is a good one moving forward. Although the regular Babylon.js scene is pretty much non-existent at this stage and we could trivially convert it, at some point I will have to inject my BVH builder, which is in .js and deals with the raw triangles of the glTFs. The builder and the acceleration-structure packing into a data texture on the GPU were the most complex code I have ever had to write in my life, lol! Luckily I learned from some scattered resources out there on the web and was able to pull everything together under one system for WebGL (with all its limitations compared to the usual native GPU and OpenGL code). I would rather get this all working in our project first; then feel free to decide how best to separate the concerns and the best way to use any modules that you want.

About GLSL shaders vs. strings of text saved in .js files: it used to matter more to me that I worked in an actual GLSL file rather than a text string, just because of syntax highlighting in the code editor. But I found out recently that in VSCode, for instance, if you have the downloadable GLSL language support extension installed, you can just open up the .js file with the text strings, find the lower-right-hand button that sets the language mode for the editor, and select ‘glsl’ - it converts all the uni-colored text strings into multicolored, syntax-highlighted GLSL. Pretty cool! This feature might have been there all along, but I just stumbled on it about a year ago, ha ha. So in the end, no, it doesn’t really matter if we have everything in .js ShadersStore and IncludeShadersStore, because I can work with those directly in my editor as well.

Thank you!


Hello @jelster
Yes, that extra breaking up into small reusable chunks is exactly what I was wanting to do. In other words, inside the shader for each unique demo/scene, you just include the parts that you need. As it stands now, the path tracing shader for our first simple blue-and-red Cornell box with spheres demo is more verbose than it needs to be. I would much rather have the end user include just the pieces that are needed, which will likely vary from one scene to the next.

You actually did what I was going to start doing with the ShadersStore, and I appreciate it. On further examination of @Evgeni_Popov 's comments and demos, however, I really would like to go full force with breaking apart the monolithic path tracing shader into small, reusable include pieces. Most of those smaller functions don’t change from scene to scene, so we probably don’t need to duplicate that code all over the place. Hope that clarifies my intent. I will gladly work on doing this ‘chunking’, as I am comfortable with all the shader stuff and how the future pieces will fit together.

About your 2nd PR: if everyone else thinks it’s fine to merge, I’ll gladly do so; I was just concerned about the controls. I don’t have much experience using workers and offscreen rendering, as I have always just used GPU shaders, so I didn’t know if it was ok to just merge this into the existing project. Maybe others here want to have some input. But at any rate, an offscreen worker for rendering sounds really cool and would open up different doors for users who don’t necessarily need real-time path tracing, and instead want the highest quality settings and a ‘render-while-you-wait’ setup.

Thank you for your contributions!


Oh that’s totally valid about the controls – I’ll update it from Draft status when I’ve got it in there.

Regarding the first PR, that seems like a sensible approach to allow more modular includes – my perspective is coming from the larger context of the JS application that uses WebGL2/BJS. Ideally, we want to introduce changes that make it easier to incorporate your BVH builder and data-packing structures later on (your darn right-handed coordinate system preference is going to bite me more than once, I can already tell…) :smiley:


Lol, sorry about that! Actually, I wouldn’t be opposed to changing from my R.H. system to a L.H. one if that would be preferable to everyone else, especially if that is Babylon.js’s preference in general and how the starting tutorials are introduced in the Playground. I would just need to flip some of the Z-axis values, change the flight camera slightly, and update the hard-coded little SetupScene function inside the demo’s path tracing shader. Other than that, if I’m not mistaken, I believe the path tracing library is handedness-agnostic. :slight_smile:

I’ll wait for some thoughts from you guys on this matter. But if the consensus is left hand, then I’ll start working on that too!


To All,
I went ahead and adopted the left-handed coordinate system, which is more in line with Babylon.js:
updated GitHub repo

Hopefully, this will allow everyone who has been working with Babylon in the past to feel more comfortable in this coordinate space.

It actually wasn’t as bad as I thought it would be to change coordinate systems. I just had to flip some Z coordinates here and there and re-wind the front-facing quads (the walls and light of our test-scene Cornell box), because the quad vertices are expected to be given in clockwise order as you’re looking at them from the front-facing side. If this turns out to be confusing and everyone would rather specify counter-clockwise tris and polys (to define the front-facing side), then I can change the path tracing quad function to accept counter-clockwise ordering of vertices as well.
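For anyone porting an existing right-handed scene over, the conversion amounts to roughly this (a sketch of the two steps; the variable names are just examples):

// 1) flip the Z component of every hard-coded position and direction
spherePosition.z *= -1.0;
lightPosition.z  *= -1.0;

// 2) reverse the vertex order of each front-facing quad, so it winds clockwise
//    when viewed from its front side:
//    right-handed (counter-clockwise): v0, v1, v2, v3
//    left-handed  (clockwise):         v3, v2, v1, v0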

Now working on breaking up the shader code into reusable include chunks - hopefully will have something to show soon! :slight_smile:


Hello again!

Just added the initial path tracing library include .js file to the project, so now we can just include<> the parts that we need inside each scene’s main path tracing shader.

This first version of the modular library has the crucial pieces necessary to do path tracing in WebGL 2.0. Soon I will add extra pieces that might not be critical but provide extra functionality or more features (like physical sky sampling with Rayleigh and Mie scattering for outdoor scenes, or the complete quadric geometry intersection library and its CSG functions).

I also did a lot of cleanup on the scene itself; apologies for the weird room measurements that were in place before, ha. I think I had those funny dimension numbers (like 552.8) lying around from the classic Cornell Box scene, which contains the rotated taller mirror box on the left and the rotated shorter white diffuse box on the right. I got those measurements from Cornell University’s old graphics course pages - someone actually measured the exact dimensions (in millimeters? numbers like 552.8?) of the physical, original wooden/painted Cornell Box, so that when students had to write their required college ray tracers, they could test their rendering against the photograph of the physical box, which I also downloaded. But anyway, those measurements are not intuitive for scene setup, so I just made everything a cube centered at the world origin with a room radius of 50 ‘units’ (it doesn’t matter what units you want to think in: centimeters, meters, inches - everything in the scene is relative, including light power).

Professional rendering packages like V-Ray, Cycles, Arnold, Octane, etc. have actual physical light wattage parameters and meters as units, so that the lights produce the correct output. Our little path tracer is not that sophisticated, using relative ‘units’, but nonetheless it will produce as photorealistic an image as anything you can pay for! :slight_smile:

I think you’ll agree that this new system of path tracing includes reduces the complexity and code duplication of the main path tracing shader in the Babylon_Path_Tracing.js file. In the future, this file can be renamed to something that describes the scene or demo a little less vaguely - for instance, you could rename Babylon_Path_Tracing.js to CornellBox_with_Spheres.js or something like that.

Lastly, I changed the capitalized constants to have more descriptive, less cryptic names: DIFF to DIFFUSE, REFR to TRANSPARENT, and SPEC to METAL. Again, these were relics from my first port of Kevin Beason’s smallpt, which had to fit a whole path tracer into 99 lines of C++ code, ha!

I’ll keep working on the path tracing include library and adding more features. In the meantime, please take a look at the new flow of the demo’s source code in the main .js file, and if you spot anything that’s still cryptic, or something that could be reorganized to be easier for non-path-tracing-minded users out there who just want to get something together quickly and see all the pretty pictures, let me know!

-Erich


https://playground.babylonjs.com/#XPFJ55#82

For my setup, I’ve still been struggling with the noise distribution. Any idea how I could fix that?

I like this setup because I was able to easily attach physics to each of the balls and have that controlling the scene. Just wondering if you have any advice.

Hello @Pryme8 !
I have to go into work atm but when I return home I will be happy to take a detailed look into your PG example and see if I can figure out what’s going on with the noise.

Talk to you soon!

Thanks for accommodating all of us “right-minded” folk :slight_smile: :smiley:

I’ve been a bit busy the past couple days, but I’m hoping to get the offscreen controls hooked up in my PR tonight or tomorrow. Thanks for doing this!


@Pryme8

I took a close look at your random number generation scheme (which directly relates to the unwanted clumping and patterns inside the noise), and it appears that the unwanted side effects stem from the noise function you are using from glslsandbox.

I put iq’s (of ShaderToy fame, works for Pixar) beautiful rng() inside your shader, and all the unwanted patterns and clumping seem to have disappeared:

your PG updated with a different rng()

However, if you do happen to notice any patterns, it is because I hacked together a seed generator based on the ‘time’ uniform. I didn’t quite know how to add my own uniform called uFrameCounter to your system, which would help produce perfect seed generation with the following shader code line:

seed = uvec2(uFrameCounter, uFrameCounter + 1.0) * uvec2(gl_FragCoord);

All you need to do is hook up this new uniform inside your .js code, then where you update all your uniforms each animation frame, just stick in:
uFrameCounter += 1.0;

note: If you must leave the simulation running for hours and hours, then maybe modulo the uFrameCounter with something like:
uFrameCounter %= 10000;

which will reset it to 0 every so often (every 10,000 frames, to be precise). That interval is large enough that you won’t notice any repetition if you’re using a more traditional static scene with a progressive-rendering sample accumulation buffer (like we have so far on our project).
You might be able to get away with a much smaller modulo on a dynamic scene like you currently have in place - the noise moves so fast that it’s hard to see slight repetitions.
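In Babylon terms, the hookup might look roughly like this (a sketch only - I’m assuming your PG drives the effect through a BABYLON.ShaderMaterial, here called pathTracingMaterial; adapt it to however your setup actually feeds its uniforms):

// declare "uFrameCounter" in the material's uniforms list when you create it, then:
let frameCounter = 0.0;

scene.registerBeforeRender(function () {
    frameCounter += 1.0;
    frameCounter %= 10000.0; // optional reset, as noted above
    pathTracingMaterial.setFloat("uFrameCounter", frameCounter);
});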

If you’re not tied too strongly to the glslsandbox function you were using, I highly recommend going with iq’s random number generator. I have tried a lot of different rng()s from glslsandbox and ShaderToy, and his tops them all for speed, compactness (only 4 lines!), and quality of randomness (I have yet to find a visible pattern when the seed is done right!). Random number generation, which controls the smoothness of the visual noise and the quality of the random sampling in Monte Carlo path tracers, is truly a black art. I have no idea how one becomes a master at programming these little gems of functions, nor will I ever, probably. I wish GPUs came with their own hardware for generating random numbers (just like we get on our CPUs), but until that materializes someday, we have to rely on shader bit-manipulation tricks and black magic to get decent results. :slight_smile:
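For reference, the integer-hash style I’m talking about looks roughly like this (written from memory as an illustration, so the exact constants and seeding may differ from iq’s original and from what ends up in our repo):

uvec2 seed; // set once per pixel per frame, e.g. from uFrameCounter and gl_FragCoord as above

float rng()
{
    seed += uvec2(1);
    uvec2 q = 1103515245u * ((seed >> 1u) ^ seed.yx);
    uint  n = 1103515245u * (q.x ^ (q.y >> 3u));
    return float(n) * (1.0 / float(0xffffffffu));
}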


Heck yeah buddy thank you. I’ll review all this in the morning!


All this talk of randomness inspired me to add my custom method of generating smooth-looking, less-distracting noise in GPU path tracing shaders. It samples a 256x256 blue noise RGBA texture - a technique inspired by Nvidia’s real-time path traced remakes, the path traced Quake and Quake II RTX.

our same Demo, but using BlueNoise texture rand() generator

You’ll hopefully notice that the scene’s surface noise settles down almost instantly (especially on the diffuse room walls), even faster than when using iq’s awesome rng() exclusively. This is because blue noise is concentrated at smaller, higher-frequency wavelengths, whereas more traditional white noise (like iq’s rng()) also contains longer, lower-frequency components, from which our human visual system picks out larger patterns (like old black-and-white TV static).
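The core of the routine is tiny; here is a sketch of the idea (not the exact code in the demo - the important parts are tiling the 256x256 texture across the screen and shifting it by a different quasirandom offset every frame, with the texture’s wrap mode set to repeat):

uniform sampler2D tBlueNoiseTexture; // the 256x256 RGBA blue noise tile
uniform float uFrameCounter;

vec4 blueNoise()
{
    // tile across the screen, then decorrelate successive frames with a low-discrepancy offset
    vec2 offset = fract(uFrameCounter * vec2(0.754877666, 0.569840291));
    return texture(tBlueNoiseTexture, (gl_FragCoord.xy / 256.0) + offset);
}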

A disclaimer though: at first, when I stuck my new blueNoise routine into one small part of the renderer, it worked really well - and then I naively thought, “Why not just use it everywhere the function rng() appears? Just replace everything with blueNoise!” Well, at first it looked really smooth and fast, but upon closer inspection, and after trying ALL of the varied demos on my three.js repo with different lighting situations and different sampling requirements, I found that it did not work perfectly, and in some edge cases it did not work very well at all - especially in the bi-directional path tracing demos, it just wasn’t cutting it.

I mentioned in the previous post how GPU random number generators can seem like black magic. Well, in this case I have to use the blueNoise ‘magic’ sparingly and in the right spots. So over time I have found a good balance between iq’s beautiful white noise rng(), which always works in every single situation but might give slightly more distracting noise and slower convergence, and my fast-and-loose blueNoise sampling algo, which makes surfaces clean up almost instantly and is less distracting to look at (especially with dynamic scenes and cameras), but sacrifices true mathematical randomness and a fair, equal distribution. From a purely mathematical standpoint, my blueNoise routine probably has some repetitions or bias that I can neither understand nor control, because again - the black art thing, ha. You wouldn’t want to use it for a scientific statistics experiment requiring random number draws, for example. :slight_smile:

But used in the right places, in non-critical and non-Monte-Carlo-sensitive areas, the blueNoise random generator is a really powerful addition, in my opinion. It is especially powerful in situations where you can only afford 1 sample per pixel per animation frame, particularly on lower-end laptops and cell phones.

Well, again I got deeper into theory than I was planning on - sorry!, ha. It’s just that I am equally both fascinated and mystified by random number generation, especially on the GPU and Monte Carlo rendering.

More library features coming soon! :slight_smile:
-Erich


It does seem smoother indeed. It is again very interesting to see and explore these rendering techniques. And I always find it very cool to watch how the image stabilizes second after second.


@Necips
Your theory is absolutely sound! :slight_smile:

Yes, because of the way our human vision operates, whatever we’re looking at at the moment is what’s in sharp focus; the rest of our field of view is a little fuzzier. Your post made me think of a technique I was working on while programming my path traced game for the browser, The Sentinel: 2nd Look, which is an homage to and remake of Geoff Crammond’s 1986 masterpiece, The Sentinel. I tried to keep the same look and feel as his awesome, other-worldly graphics, but since we can do real-time ray tracing and path tracing inside the browser, I decided to add some effects that wouldn’t even be possible with today’s AAA rasterized game engines: namely, real-time double images (no screen-space reflection hacks!) on the solar-panel terrain, real-time correct reflections on the metal mirror sphere (no cube maps!) that the player uses as a 3D cursor for selection, and lastly, directly pertaining to your earlier comment, a real-time depth of field that updates every animation frame and focuses exactly on the metal mirror sphere, which is hopefully where the player’s eye focus remains through most of the game.

The Sentinel: 2nd Look

note: after clicking anywhere to capture mouse, press Spacebar to cycle through terrain generator, then ENTER to enter your player robot.

This is a W.I.P. so the gameplay is not functional yet, but anyway when you’re inside the player’s robot, drag the mouse around and notice how the mirror sphere remains in sharp focus, no matter how close the sphere is to you or how far back in the distance it is on the large terrain. If you look away from the sphere, the illusion is broken, because I as the lowly programmer cannot predict where every player’s eye sockets are going to be rotated, ha! But seriously, while playing if I keep looking at the sphere, I can’t really tell that the other parts of the picture are slightly out of focus, unless I consciously look at them of course. It all just looks ‘normal’, similar to how I see the world around me.

When I was playing around with this nifty feature, I thought, as you do, that it would be advantageous if we could spend the bulk of the time and precious samples on what really matters: where the user is looking. Unfortunately, I am not that sophisticated, nor do we possess reliable technology in 2021 that can accurately and cheaply track which pixels the player/user is focusing on at every split second.

But you’re right: if we could somehow track this eye motion in the near future (I’m positive our technology will get there relatively soon), then graphics techniques such as real-time ray tracing and real-time path tracing could really benefit. I’m not so sure traditional rasterized graphics and games would benefit, though. In traditional, ubiquitous rasterized 3D graphics, the image is geometry-centric, or geometry-bound: the game or app has to loop through all the scene geometry that is in the camera’s current view. The final pixels don’t really care where the player is looking; it’s still pretty much the same amount of work to process the camera’s view beforehand.

But in ray/path tracing, we are pixel-centric, or pixel-bound. We must loop over all the pixels first (on traditional CPU tracers; on GPUs, each individual pixel is queried in parallel in GPU warps) and only then ask what geometry we need to process. In this rendering scheme, pointing the camera slightly higher at the sky or background can give you a solid 60 FPS even on mobile, whereas pointing the camera at a more graphically complex or more randomly divergent surface (like crazy terrain or water waves) might drop it down to 30 FPS or worse!

So I believe it would make a big difference if someday we could, as you suggest, spend more calculations on the handful of pixels that the user is actually looking at, and do only the minimum amount of work (or use all the non-mathematically-sound, imperfect sampling cheats) on the vast majority of the image where the user is NOT focusing anyway. That way we could greatly speed up the GPU calculations, all the while not degrading the ‘perceived’ final quality of the image as the end user navigates the dynamic scene.
