Path-tracing in BabylonJS

I’ve been trying to combine more than one pass per frame, is that possible?

Instead of a ping-pong. Like a cascade then hand off?

Sorry to hijack this thread…

But when I was dabbling in this, I kept wanting to render the same scene 3-6 times (even more in some cases) within a single frame, each with a different seed, and then combine them into a single image. But I felt like it was just tricking me and was not really doing it per frame.

The basic idea was to sacrifice some fps for a less noisy real-time image. There was some talk on this with the other path tracing thread but it seems like this is the new hotness.

Also @PichouPichou have you dabbled in any Hybrid Rendering? Where there is a geometric scene and then some path-tracing done on top?

Success! Thanks to @Evgeni_Popov 's awesome help and guidance, I was able to get the pixel accumulation (history) buffer working. Now when the camera is still, our Babylon.js PathTracing Renderer continually samples the stationary scene over and over again, all the while averaging the results. After just a couple of seconds, it converges on a noise-free photo-realistic result - yay!
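For anyone curious about the math the accumulation (history) buffer boils down to: each frame contributes one noisy sample, and the displayed pixel is the running average of all samples so far, which converges on the noise-free result. A minimal plain-JavaScript sketch of that idea (all names here are illustrative, not from the actual renderer):

```javascript
// Running average performed by a pixel-accumulation (history) buffer:
// every frame adds one noisy sample, and the value shown on screen is
// the mean of all samples so far.
function makeAccumulator() {
  let sum = 0;
  let sampleCount = 0;
  return {
    addSample(value) {
      sum += value;
      sampleCount += 1;
      return sum / sampleCount; // what the screen shows this frame
    },
    get count() { return sampleCount; },
  };
}

// Noisy samples scattered around a "true" radiance of 0.5:
const acc = makeAccumulator();
let shown = 0;
for (const s of [0.9, 0.1, 0.7, 0.3, 0.5]) shown = acc.addSample(s);
console.log(shown); // converges toward 0.5 as samples accumulate
```

The same averaging can be done in-place on the GPU as `mix(history, newSample, 1.0 / sampleCount)`, which is the usual shader formulation of this running mean.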


Evgeni, you will notice that I kind of went back to my old accumulation buffer method that I had implemented in three.js. Although your ping-pong style buffer was a cool solution, in the end I didn’t want to have to change all my shader code at the bottom of the pathTracing main() function that already was proven to work so smoothly. Also inside the tight Babylon.js engine render loop, I would have had to change a lot of my variables and uniform updating system, which again, I had already reliably solved years ago. Even though I didn’t end up doing it the same way as your helpful PG, I couldn’t have done it without your immense help! Thank you!

Please don’t freak out when you see that the length of the main Babylon_Path_Tracing.js file went from 88 lines to 973 lines, lol! This is simply because I couldn’t quite load in all the shaders from the ‘shaders’ folder (which is now removed from the project repo), so I had to stick the shaders in Babylon’s ShadersStore at the top of the main .js file. My custom path tracing utility functions take the bulk of the file space, but if you scroll down, the Babylon.js setup part is under 200 lines of code! :wink:

Good to see you again! I haven’t forgotten our old multi-sample discussions. Since this project is just getting off the ground, and we’re not too entrenched with the path tracing sampling part, I could envision making a parallel shader with its own main() function that allows the user to specify (through a simple uniform) how many times they want the pathTracing sample loop to run before exiting the main() function and passing off to the screenCopy shader. On my humble laptop, I can only afford 1 sample per frame, but if you have an RTX or similar, you could do 12-20 samples per pixel and get a near-noise-free result on each frame at 60 FPS!
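The per-frame multi-sampling idea above can be sketched in plain JavaScript (this is just an illustration of the averaging logic, not the actual GLSL; `samplesPerFrame` stands in for the uniform described above, and mulberry32 is simply one convenient seedable PRNG):

```javascript
// mulberry32: a tiny deterministic, seedable PRNG returning values in [0, 1).
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Hypothetical stand-in for one path-traced sample: a true radiance of 0.5
// plus seeded noise of amplitude 0.1.
function traceSample(rng) {
  return 0.5 + (rng() - 0.5) * 0.2;
}

// Take several independently-seeded samples within one frame and average
// them before the result would be handed to the accumulation buffer.
function renderFrame(samplesPerFrame, frameSeed) {
  let sum = 0;
  for (let i = 0; i < samplesPerFrame; i++) {
    const rng = mulberry32(frameSeed * 1000 + i); // distinct seed per sample
    sum += traceSample(rng);
  }
  return sum / samplesPerFrame; // less noisy than any single sample
}

const frameResult = renderFrame(16, 42);
console.log(frameResult); // somewhere near the true value of 0.5
```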

To all, now that the sampling accumulation buffer is in place, we can start adding more cool features to the renderer. First up on my end will be adding full camera flight and keyboard control with additional focusDistance, aperture, and FOV parameters, all interactive in real-time!



You can simply execute the render code multiple times:

I’m at 45fps when doing the loop 20 times.


Great work!

You can put this shader code into another .js file and include it in your main.js file :wink:

Also, you don’t need to put it in the ShadersStore object as you are simply passing the shader code to the effect wrapper.


@Evgeni_Popov @erichlof @Pryme8
Super happy to watch you all progressing on this topic
just a message to show some appreciation
looking forward to seeing this develop


I’m absolutely fascinated and excited by this entire effort and thread. Amazing work by the RT^2 Triumvirate (real-time ray-trace)!

Someone mentioned upthread that vsync caps frame rates; what if you didn’t have to contend with that? I skimmed the BJS setup code in the repos, and it doesn’t look like there’s anything preventing the use of an offscreen canvas, which is pretty easy to set up.

The “Support for offscreen canvas” documentation has a video walking through it. I’ll see if I can play around with this at some point today as well, if it seems promising.


hmmm, I tried that and thought it was not working. Maybe it was! Thank you.


Thank you for the separate .js file suggestion to hold all the shader pathtracing library/utility functions. I suspected there was a way to ‘include’ the desired parts without having to pollute the main Babylon.js setup file for every single demo/scene.

When you mentioned not using ShadersStore, how would that look exactly? What I want in the end is to place all the utilities like calcFresnelReflectance, solveQuadratic, sphereIntersect, stuff like that which is used in nearly every demo/scene, into a large 2000-line .js file and ‘pull’ from it inside each demo’s main .js file, which will be small in comparison. Also, I may have to use ShadersStore anyway, as I need a way to #include<> inside each demo’s dedicated shader, which handles the geometry intersection and light ray optics for that particular scene.

If you take a look at the three.js-PathTracing-Renderer, it’s named PathTracingCommon.js, and each demo does an #include from that store inside its own path tracing shader. This keeps each demo’s unique dedicated shader much shorter, because it can just call #include<sphere_intersect>, for example.

You don’t have to do the whole library, but a demonstration of how to save a small path tracing shader utility function in a separate .js file, and how to load it from the main path tracing effectWrapper’s shader, would be super helpful, as I am a Babylon.js newbie! :slight_smile:

Edit: Once I learn how to store and pull in a small utility function saved in its own .js file, I will work on getting my whole pathtracing library saved in separate functions (nearly 3000 lines and growing!) so each demo/scene can just pull from that as needed.

Thanks again!


Actually, I found it very interesting to really understand how you get from a very noisy image to a very pixel-perfect one! Thanks for the lesson really :wink: Not a raytracing expert yet but slowly starting to understand the mechanism.

@Evgeni_Popov I want to thank you also for the support.
BTW this effect is so cool

It seems to have been very crucial to make it work.
And what a result @erichlof, the first raytracing scene ever made with BabylonJS! So damn cool!

No, I never did that. The best solution I found and use at Naker to get the best rendering quality is to push every scene parameter (hardware scaling, shadow map size, antialiasing samples, etc.) and render one frame with scissor enabled to prevent the browser from freezing while rendering that frame (see: Slow down rendering of one frame - #3 by PichouPichou)

Indeed, I think it will be easy to set up. But the hard part with an offscreen canvas is that you have to re-implement every interaction, like camera dragging for instance, as everything runs in the worker. I have some code that manages that, which I’d be happy to share if you want.

Haha, I won’t; I know this issue with shader code. I think it is a shame we can’t have shaders in .glsl files to make them way more readable, by the way.

Why not use TypeScript and do simple imports of separate files containing the specific shaders?
It would make the code more readable, and it would allow us to use ES6 Babylon.js modules to import only what we need instead of the entire Babylon.js library: Babylon.js ES6 support with Tree Shaking | Babylon.js Documentation

I will be happy to manage this part if you want :wink:



Indeed, but what if you instead were to use the offscreen canvases as a replacement for the current setup used to gain access to the full pixel buffer, or similar pipeline phase? No more ortho cameras staring at billboards, just a fresh buffer of color data!

Re-reading the OSC thread, I noticed you asked (and were answered in the affirmative) this interesting nugget:

If we have several canvas on one page, does that mean that each canvas will have its own worker and thus its own thread?

I’m barely following the technical details in the discussion, but it does seem to me that you could use this as a way to take the time to compute ultra-high-quality samples that could be blended into the “historical buffer” every 3-6 frames or so. All the data transfers are effectively async to the main render thread, so assuming I’m not getting something fundamental wrong, this would seem to allow for :cake: better render quality while (not!) eating into the overall FPS of the user-facing scene?


Another milestone reached! I just added user camera flight control for 1st-person scene navigation with the WASD and QE keys. I also added realtime control of the following physical camera parameters:

Mouse wheel controls camera FOV for quickly zooming in and out.
< and > (comma and period keys) control the camera aperture size for a realistic depth-of-field effect.
- and + (dash and equals keys) move the scene focal point back and forth, giving further depth-of-field control.
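For readers wondering how aperture and focus distance produce depth of field, the standard thin-lens approach is to jitter each ray’s origin within the aperture disk and re-aim it at the point on the focal plane, so objects at the focus distance stay sharp while everything else blurs. A rough sketch under simplifying assumptions (plain JavaScript, camera looking down -z, all names illustrative; this is not the renderer’s actual code):

```javascript
// Thin-lens depth-of-field ray generation (sketch):
// 1. Find the point the original ray hits on the focal plane.
// 2. Pick a random point on the aperture disk as the new ray origin.
// 3. Re-aim the ray at the focal point.
function depthOfFieldRay(camPos, rayDir, aperture, focusDistance, rand1, rand2) {
  // Point on the focal plane (this stays in focus).
  const focal = [
    camPos[0] + rayDir[0] * focusDistance,
    camPos[1] + rayDir[1] * focusDistance,
    camPos[2] + rayDir[2] * focusDistance,
  ];
  // Uniform random point on the aperture disk (disk lies in the xy plane
  // for this sketch, since we assume the camera looks down -z).
  const r = aperture * Math.sqrt(rand1);
  const theta = 2 * Math.PI * rand2;
  const origin = [
    camPos[0] + r * Math.cos(theta),
    camPos[1] + r * Math.sin(theta),
    camPos[2],
  ];
  // New direction: from the jittered origin toward the focal point.
  const d = [focal[0] - origin[0], focal[1] - origin[1], focal[2] - origin[2]];
  const len = Math.hypot(d[0], d[1], d[2]);
  return { origin, dir: [d[0] / len, d[1] / len, d[2] / len] };
}

// With aperture 0 this degenerates to a pinhole camera: the ray is unchanged
// and everything in the scene is sharp.
const pinhole = depthOfFieldRay([0, 0, 0], [0, 0, -1], 0, 10, 0.3, 0.7);
console.log(pinhole.origin, pinhole.dir);
```

The `Math.sqrt(rand1)` on the radius is what makes the disk sampling uniform in area rather than clustered at the center.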

New Camera Features

It’s so enjoyable to be able to fly around the scene and set up different camera views, FOV, and depth-of-field for a unique look! And thanks to our new accumulation buffer, it handles the camera changes seamlessly in real time!

Enjoy! More features on the way :slight_smile:


The Effect.ShadersStore is simply a repository where you can put your shader source code and refer to it afterwards through the key used to store it. But you can also pass the shader code directly to the EffectWrapper; both ways work. Using #include<...> works in both cases, you simply need to put your include code in Effect.IncludesShadersStore.

Example of using a #include and putting the source code in the ShadersStore:

Example of using a #include without putting the source code in the ShadersStore:

It uses a ShaderMaterial for the demonstration, but it works the same way with an EffectWrapper. Also, you should put the BABYLON.Effect.ShadersStore["xxx"]="yyy" assignments in a separate .js file and inject that file in your HTML file.

Using the ShadersStore is a bit better than passing the source code directly, because only the key is used to look up an existing Effect that would match the code, whereas in the other case the entire source code is used for the lookup.
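To illustrate the concept behind the include store (this is a toy substitution in plain JavaScript, not Babylon.js’s actual shader preprocessor, and the chunk names are made up): the store is just a map from chunk names to GLSL source strings, with each #include<name> token replaced by the stored chunk before compilation.

```javascript
// Toy version of an include store: shader chunks keyed by name.
const includesStore = {
  sphere_intersect: `
float sphereIntersect(vec3 ro, vec3 rd, vec4 sph) {
  vec3 oc = ro - sph.xyz;
  float b = dot(oc, rd);
  float c = dot(oc, oc) - sph.w * sph.w;
  float h = b * b - c;
  if (h < 0.0) return -1.0;
  return -b - sqrt(h);
}`.trim(),
};

// Substitute every #include<name> with the corresponding stored chunk.
function resolveIncludes(source, store) {
  return source.replace(/#include<(\w+)>/g, (match, name) => {
    if (!(name in store)) throw new Error(`Unknown include: ${name}`);
    return store[name];
  });
}

// A short per-demo shader that pulls the shared utility in by name:
const shader = `
#include<sphere_intersect>
void main() { /* ... use sphereIntersect here ... */ }`;

const resolved = resolveIncludes(shader, includesStore);
console.log(resolved.includes("float sphereIntersect")); // true
```

This is why each demo’s dedicated shader can stay short: the 2000-plus lines of shared utilities live once in the store, and each shader only names the chunks it needs.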

1 Like

Thanks so much! Now I think I understand the differences. I will get to work on creating a separate .js ShadersStore path tracing utility library that can be used with #include<> wherever we need it.

Will post to the github repo once I get a small first part of it loading in correctly, then I’ll just keep adding to this ever-growing library .js file as we move forward.

Have a good weekend!

@erichlof - I’ve prepared a PR that extracts the shaderstore code from the main JS file – do you want me to submit it to your repos or from a fork?

edit: all it does is pull the shader declarations into their own file and add a script tag to the HTML doc – can’t use modules in this context

1 Like

To all,

As you saw with my reply to @Evgeni_Popov , I will begin work on creating a large separate .js file that we can pull bits and pieces from as needed in each scene’s dedicated shader. This will hopefully greatly reduce the shader size for each different demo/scene that you want to create.

Speaking of this modular approach, @PichouPichou had suggested earlier that we could modularize some of the .js setup code so that the whole Babylon.js library would not have to be included in every small demo/scene, and that we could just pull in relevant parts of the Babylon.js library that we needed in order to set up the almost non-existent main Babylon scene (all the heavy lifting and geometry processing will be done in the path tracing shader library parts).

Although I think that this would be a neat idea, I’m afraid I am not the one to tackle this job, as I have never used typescript, and I am still somewhat of a newbie to Babylon.js and its inner workings, so far as modules are concerned. In the past, even with three.js (the library I am much more knowledgeable in), I have naively just included the whole three.min.js file in my HTML script tags, without ever trying to load in modules in the more modern style of front-end web programming.

That being said, if anyone feels comfortable enough with Babylon to take my current .js setup/init part and somehow modularize it, please have at it! The good news is that, as I previously mentioned, the main usual Babylon scene is almost non-existent at this stage, with just a couple of effect wrappers and a camera - that’s it, literally!

In the future, when we want objects in the scene to be transformable (position, rotation, scale), like the 2 boxes in the famous Cornell Box scene, I will use Babylon.js’ equivalent of an Object3D (at least that’s what it is called in three.js, but I’m sure there is an equivalent) as a transform placeholder, kind of like a 3D editor gizmo, to feed into the path tracer. The path tracer will take care of transforming all the objects for us automatically by first transforming the intersection rays by the objects’ inverse matrices (ray tracing’s greatest secret trick of all time!). So I will eventually need a way to init and specify a different Babylon transform matrix placeholder for every object that is not a simple sphere.

In the near future, I need to load in a texture, an RGBABlueNoise texture to be precise (to use in smoothing out random noise), as well as any material textures (like the wood surfaces in my billiard table demo). But I’m sure that will be a trivial matter with Babylon, as it is with Three.

Much farther down the line, when we want to start tracing glTF triangular models, I will have to pull in Babylon’s glTF loader, which could be another module loaded as needed. I will also have to pull in my custom .js BVH acceleration structure builder, which sorts the complete arbitrary list of the glTF’s triangles and creates a compact data texture containing an efficient bounding box hierarchy for the GPU to chew on when tracing the scene. But we can cross that bridge later. :slight_smile:

So in summary, the amount of Babylon.js setup code for each main scene (as long as it doesn’t contain glTF models) is pretty much complete and what you see now on the repo. The other details come when we want to have simple object transforms and when we want to load in triangular models eventually.

That’s great! You can just submit it as a PR to the main repo if you’d like. This weekend is a little hectic for me, tomorrow being Mother’s Day in the U.S. and all, so I might need 2 or 3 days to look at it.

But regardless, thank you for your contribution!

1 Like

The modularization and loading stuff is pretty straight-forward, once you’ve banged your head against a wall enough times :smiley:

I figured this was a good way to start chipping away – just saw your response to earlier thread about which repos to submit to (whoops!)

Extract shaderstore into separate file by jelster · Pull Request #1 · erichlof/Babylon.js-PathTracing-Renderer

1 Like

Wasn’t aware that you wrote this earlier, but :white_check_mark:

1 Like

Everything looks great at a quick first glance. As you just mentioned, could you maybe change the name to ShadersStoreInclude? Each demo/scene requires its own small dedicated path tracing shader. But inside these individual dedicated shaders, I wanted them to be able to call include<> and pull relevant parts from the large .js file. This will keep the required dedicated shaders small in size.

Thanks again!

1 Like

Gotcha - no problem! I’ll push up a commit with the file rename shortly. Because the shader source is being assigned to the global (static) BABYLON.Effect.ShadersStore key, each individual scene created in the same browser window (closure, etc) should still be able to access each individual shader