Path-tracing in BabylonJS

You are correct @JohnK, thank you!

Yes @PichouPichou, like JohnK said, elapsedTime can be acquired by calling performance.now() every animation frame.

let elapsedTime;

…and inside the tight .js animation loop,…

elapsedTime = performance.now() * 0.001;

elapsedTime (via performance.now()) will return an ever-increasing, accurate timer/counter in milliseconds, which is why you have to multiply it by 0.001 (or divide by 1000) to go from milliseconds to seconds. If I’m not mistaken, it counts upwards from when the webpage was first navigated to (which would be time 0.0000…).

One additional thing to keep in mind is that if the path tracing shader relies on this elapsedTime to drive uv texture movement or to spin an object over and over again, the accuracy becomes poor once the elapsedTime number gets too large. So to combat this, I put an additional modulo operation on elapsedTime, like so:
elapsedTime = (performance.now() * 0.001) % 1000;

Once the app has been running for 1000 seconds, it starts over at 0.0. This is an arbitrary number - but it should be small enough to keep elapsedTime a reasonable floating point number in seconds when it is used in math operations in the shader, and large enough that the user will likely not see it reset to 0 (which might be jarring if it is controlling wave motion or a smoothly changing angle). I suppose the number could be a multiple of 2π somewhere around 1000, so that if it were indeed used for angular velocity, it would smoothly reset/repeat. But I don’t know if that is too important.
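Putting those pieces together, the whole pattern looks something like this - a minimal sketch assuming the timer in question is performance.now(), which matches the millisecond, page-load-relative counter described above:

```javascript
// Minimal sketch of the timing pattern described above.
// performance.now() returns milliseconds since the page was navigated to,
// so * 0.001 converts to seconds; the % 1000 wrap keeps the value small
// enough to stay accurate as a 32-bit float inside the shader.
let elapsedTime = 0;

function animate() {
  elapsedTime = (performance.now() * 0.001) % 1000; // seconds, wraps at 1000

  // ... update the shader's time uniform with elapsedTime, render the frame ...

  requestAnimationFrame(animate);
}
```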

Thanks for all your work! :slight_smile:


Have done a TypeScript version of Eric’s BVH_Acc_Structure_Iterative_Builder along with a PR to GitHub - nakerwave/BabylonJS-PathTracing-Renderer . Not all that familiar with TypeScript, so it will need improving and checking. Hope it is helpful.


Hello everyone!

Now that people are working on porting my renderer over to the Babylon.js engine, I was thinking that this would be a good time and place to give sort of a general overview of how ray/path tracing works inside the Webgl2 system and the browser. That way, all who are interested, but who might not be able to contribute to the source, might benefit from the details and approaches that I took.

Warning: sorry, this will be an epic-length, multi-part post! I hope everyone is ok with that! :slight_smile:

Part 1 - Background

I’ll begin with some of the differences between traditional rendering and ray tracing. Traditional 3d polygon rendering makes up probably 99.9% of real-time games and graphics, no matter what system or platform you’re on. So whether you’re using DirectX, OpenGL, WebGL2, a software rasterizer, etc., you will probably do it/see it done this way. In a nutshell, vertices of 3d triangles are created on the host side (JavaScript, C++, etc.) and sent to the graphics card via a vertex shader. The graphics card (from here on referred to as the ‘GPU’) takes the input triangle vertex info and usually performs some kind of matrix transformation (perspective camera projection, etc.) to project the 3d vertices to 2d vertices on your screen. These are then fed to the fragment shader, which rasterizes (fills in, or draws) the pixels contained inside each flattened triangle.

The major benefit in doing it this way for several decades now, is that as GPUs got more widespread, they were built for this very technique and are now mind-numbingly fast at rendering millions of triangles on the screen at 60fps!

One of the major drawbacks to this seemingly no-brainer fast approach is that in the process of projecting the 3d scene onto the flat screen as small triangles, you lose global scene information that might be just outside the camera’s view. Most importantly, you lose lighting and shadow info from surrounding triangles that didn’t make the cut to get projected to the user’s final view, and there is no way of retrieving that info once the GPU is done with its work. You would have to do an exhaustive per-triangle look-up of the entire scene, which would make the game/graphics screech to a halt. Therefore, many techniques have been employed and much research has been done to make it seem to the user that none of the global illumination and global scene info has been lost. Shadow maps, screen-space reflections, screen-space ambient occlusion, reflection probes, light maps, spherical harmonics, cube maps, etc. have all been used to try and capture the global scene info that is lost if you choose to render this way. Keep in mind that all these techniques are ultimately ‘fake’ and are sort of ‘hacks’ that are subject to error and require a lot of tweaking to look good for even one scene/game - then everything might not look right for the next title, and the process begins again. Optical phenomena such as true global reflections, refractions through water and glass, caustics, and transparency become very difficult or even impossible with this approach. BTW, this is known as a scene-centric or geometry-centric approach. In other words, vertex info and scene geometry take the front seat, while global illumination and per-pixel effects take a back seat.

Now another completely different rendering approach has been around the CG world for nearly as long - ray tracing. If the rasterizing approach was geometry-centric, then this ray tracing approach can be classified as pixel-centric. In this scenario on a traditional CPU, the renderer looks at 1 pixel at a time and deals with that pixel and only that pixel; when it is done, it moves over to the next pixel, and the next, etc. When it is done with the bottom-right pixel on your screen, the rendering is complete. Taking the first pixel, rather than giving it a vertex to possibly fill in or rasterize, the renderer asks the pixel, “What color are you supposed to be?”. So in this pixel-centric approach, the pixel color/lighting takes a front seat, and the scene geometry is queried only once it is required. The pixel answers, “Well, let me shoot out a straight ray originating from the camera through me (that pixel of the screen viewing plane) out into the world and I’ll let you know!” Let’s say it runs into a bright light or the sun - great - a bright white or yellowish-white color is recorded and we’re done! - well, with that one pixel. Say the next pixel’s view ray hits a red ball. Ok, we have a choice - we can either record full red and move on to the next pixel (and thus have a boring, non-lit scene), or we can keep querying further for more info: like, “now that we’re on the ball, can you see the sun/light source from this part of the red ball?” If so, we’ll make you bright red and move on; if your line of sight to the light is blocked, we’ll make you a darker red. How do we find this answer? We must spawn another ray out of the red ball’s surface, aim it at the light, and again see what the ray hits. If we turn the red ball into a mirror ball, a ray must be sent out according to the simple optical reflection formula, and then we see what the reflected ray hits.
If the ball was instead made out of glass, at least 2 more rays would need to be sent, one moving into the glass sphere, and one emerging from the back of the sphere out into the world behind it, whatever might be there. Although more complex, there are well known optical refraction formulas to aid us. But the main point is that with this rendering approach, the pixels are asked what they see along their rays, which might involve a complex path of many rays for one pixel, until either a ray escapes the scene, hits a light source, or a maximum number of reflections is reached (think of a room of mirrors - it would crash any computer if not stopped at some point). From the very first ray that comes from the camera, it has to query the entire scene to find out what it runs into and ‘sees’. Then, if it must continue on a reflected/refracted/shadow-query path, it must at each juncture query the entire scene again! No hacks or assumptions can be made. The whole global scene info must be made available at every single turn in the ray’s life.
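The ray’s life described above can be condensed into a single loop. Here’s a hypothetical outline in JavaScript - not the actual renderer code; scene.intersect() and hit.scatter() are illustrative stand-ins:

```javascript
// Hypothetical sketch of one pixel's ray loop, as described above.
const MAX_BOUNCES = 6; // stop runaway paths (the "room of mirrors" case)

function tracePixel(ray, scene) {
  let color = [0, 0, 0];
  let throughput = [1, 1, 1]; // fraction of light surviving the path so far

  for (let bounce = 0; bounce < MAX_BOUNCES; bounce++) {
    const hit = scene.intersect(ray); // must query the ENTIRE scene each bounce
    if (!hit) break;                  // ray escaped the scene into the void
    if (hit.isLight) {                // reached a light: pick up its energy, done
      color = color.map((c, i) => c + throughput[i] * hit.emission[i]);
      break;
    }
    throughput = throughput.map((t, i) => t * hit.albedo[i]); // surface tint
    ray = hit.scatter(ray); // reflect, refract, or pick a random diffuse direction
  }
  return color;
}
```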

As far as we know, this is how light behaves in the real world, although a major difference is that everything in the real world happens in reverse: the rays spawn from the light source, travel out into the scene, reflecting/refracting, and only a tiny fraction successfully ‘finds’ and enters that first pixel in your camera’s sensor. This is a computational nightmare, however, because the vast majority of light rays are ‘wasted’ - they don’t happen to enter that exact location of your camera’s single pixel. So we do it in reverse: loop through every viewing-plane pixel (which is vastly more efficient computationally) and instead send a ray out into the world, hoping that it’ll eventually hit objects and ultimately a light source. Therefore the field of ray tracing is actually ‘backwards’ ray tracing.

The benefits of rendering this way are that, since we are simulating the physics of light rays and optics, images can be generated that are photo-realistic. Sometimes it is impossible to tell whether an image is from a camera or from a ray/path tracer. Since at each juncture of the ray’s path the entire ‘global’ scene must be made available and interacted with, this technique is often referred to as global illumination. All hacks and approximations disappear, and once-difficult or impossible effects such as true reflections/refractions, area-light soft shadows, transparency, etc., become easy, efficient, and nearly automatic. And these effects remain photo-realistic - no errors or per-scene tweaks necessary.

This all sounds great, but the biggest drawback to rendering this way is speed. Since we must query each pixel, and info cannot be shared between pixels (it is a truly parallel operation with individual ray paths for each individual pixel), we must wait for every pixel to run through the entire scene, making all the calculations, just to return a single color for 1 pixel out of the roughly 2 million on a 1080p monitor. This is CPU-style rendering, by the way - the GPU hasn’t entered the picture yet. Many famous renderers and movies have been made using this approach, coded in C or C++, and run traditionally on the CPU only.

Then fairly recently, maybe 15 years ago, when programmable shaders for GPUs became a more widespread option, graphics programmers started using GPUs to do some of the heavy lifting in ray tracing. Since it is an inherently parallel operation, and GPUs are built with parallelism in mind, the speed of rendering has gone way up. Renderings for movies that once took days now take hours or even minutes. And in the cutting-edge graphics world, ray-traced reflections and shadows (and, to a lesser extent, path tracing with true global illumination) have even become real time at 30-60 fps!

My inspiration for starting the path tracing renderer came from being fascinated by the older Brigade 1 and Brigade 2 YouTube videos. Search for the Sam Lapere channel, which has 173 insane historic videos. He and his team (which included Jacco Bikker at one point) were able to get the computers of 10-15 years ago to approach real-time path tracing. I was also inspired by Kevin Beason’s smallpt: he fit a non-real-time CPU renderer, complete with global illumination, into just 100 lines of C++! I used his simple but effective code as a starting point and moved it to the GPU with the help of WebGL and the browser. In the next part, I’ll get into the details of how I had to set everything up in WebGL (with the help of a JavaScript library) in order to get my first image on the screen inside a browser.

So in conclusion, this was a really long way of saying that when you choose to go down this road, you are going against the grain, because GPUs were meant to rasterize as many triangles as possible in a traditional projection scheme. However, with a lot of help from some brilliant shader coders / ray tracing legends of yesteryear, and a decent acceleration structure, the dream of real-time path tracing can be made into reality. And if you’re willing to work around the speed obstacles, you can have photo-realistic images (and even real-time renderings) on any commodity device with a common GPU and a browser! Part 2 will be coming soon! :slight_smile:


Hello again! Here’s the next part as promised:

Part 1A - Hybrid Rendering Overview

Before I go into the implementation details, I forgot to mention in the last post that if you don’t want to do either pure traditional rasterization, or pure ray tracing (like the path I took), you can actually use elements of both seemingly incompatible rendering approaches and do a hybrid renderer. In fact, if you’ve been keeping up to date with all the latest Nvidia RTX ray tracing demos and games that use these new graphics cards’ ray tracing abilities, then you’ve already seen this in action!

What the current 2020 RTX-enabled ray tracing apps are doing is a blend of rasterization and ray tracing. Remember I mentioned in the previous post that if you do pure ray tracing from scratch like we are doing, you have to sort of go against the grain because older graphics cards weren’t really designed with that in mind. Their main purpose is to rasterize polygons to the screen as fast as possible. Knowing this, Nvidia chose to roll out ray tracing to the masses using the already well-developed triangle rasterization of its own cards, and blast the scene to the screen on the first pass at 60 fps (which is totally reasonable nowadays, no matter the complexity).

After the triangles are traditionally projected (mostly unlit) to the screen, the specialized ray tracing shaders kick in to provide true reflections, refractions, and pixel perfect shadows without the need for shadow maps. The advantages of doing things this way are that on the first pass, you are working in line with the graphics card at what it does best, and only those triangles that made the cut to the camera’s view need be dealt with when later ray tracing. In other words, blank pixels that will ultimately be the sky or background, or triangles that developers choose not to have any ray tracing effects done on them, can be skipped and thus the expensive ray tracing calculations for those pixels can be avoided entirely.

This is almost the best of both worlds when you first hear about it / see it. But in actuality, you are “kicking the can down the road” and will eventually have to deal with ray tracing’s speed bottlenecks at some point. Once the polys are blasted to the screen, great - but now you must make the entire scene geometry, with all its lights and millions of triangles (some of which may be just offscreen), available to each of the millions of incoherent rays at every twist and turn of their various paths into their surrounding environment. This part is just as expensive as our pure ray tracer. Unless you have some kind of acceleration structure like a good BVH, the game/app will grind to a halt while waiting on the ray-traced color and lighting info to come back for the pixels that cover the scene’s rasterized triangles. Another disadvantage is that in order to use the GPU this way, all your scene geometry must be defined in the traditional way: as triangles. You can’t really do pure pixel-perfect mathematical shapes like my geometry demos, or realistic camera depth of field, and you can’t do ray marching for things like clouds, water, and other outdoor fractal shapes, unless they are all triangulated, which could add stress to the already busy BVH.

Nvidia has done a good job at providing tools and hardware inside their new GPUs to make the bottlenecked ray tracing part go faster. They have matrix processors for doing many repetitive math operations needed for ray tracing. They also have processors devoted only to maintaining a dynamic BVH every animation frame. The scene and characters can even move inside the BVH - that part is pretty amazing. Also they have AI-trained noise reduction software to aid in smoothing out the soft shadows and glossy reflections. That part is a whole other level - I don’t even know how they do that magic.

So with all that background, I chose to do pure ray/path tracing. I chose to give up the quick rasterization pass that Nvidia and the like enjoy, but with the advantage that I can manipulate the very first rays cast out into the scene from the camera. True depth of field, pixel accurate pure math shapes, no need for switching shaders half way through the animation frame, and the ability to do outdoor fractal non-triangular shapes, all become possible, and in some cases, even trivial (like our DoF in 6 lines!).

One last note before moving on to our implementation overview in the next post: I have been using the term ray tracing, not path tracing, when referring to Nvidia’s initial RTX demos and the AAA games that have started to use their technology. Most of those real-time renderings are doing specular ray tracing only, as kind of a bonus feature: mirror, metal, glass, and puddle reflections, plus shadows. But that is not true full global illumination like we are attempting. The closest demos to reaching this goal while using RTX that I can think of are the recent Minecraft RTX demos and the slightly older Quake Path Traced demos. Minecraft was a good test bed and starting point for real-time GI because ray-box intersection routines are very efficient, and boxes make up nearly all of Minecraft’s geometry, which exactly matches the BVH bounding boxes. Quake was another good first test of game path tracing because the polycount is low and the environments are less detailed than more modern titles. Still, both are very popular on YouTube, and both show promise for the future, when we will have movie-quality lighting and GI moving at 30 to 60 fps!
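As an aside, those efficient ray-box intersection routines are usually the classic ‘slab’ test. A minimal sketch - the function name and argument layout here are my own, for illustration:

```javascript
// Minimal slab-method ray/AABB test, the workhorse behind box geometry
// and BVH node tests. invDir holds 1/direction per axis (Infinity is
// fine for zero components, as long as the origin isn't on a box face).
function rayIntersectsBox(origin, invDir, boxMin, boxMax) {
  let tNear = -Infinity;
  let tFar = Infinity;
  for (let axis = 0; axis < 3; axis++) {
    const t0 = (boxMin[axis] - origin[axis]) * invDir[axis];
    const t1 = (boxMax[axis] - origin[axis]) * invDir[axis];
    tNear = Math.max(tNear, Math.min(t0, t1)); // latest entry across slabs
    tFar = Math.min(tFar, Math.max(t0, t1));   // earliest exit across slabs
  }
  return tNear <= tFar && tFar > 0; // hit if the slab intervals overlap
}
```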

I promise to actually get to implementation details on the next post - sorry for the tangents! :slight_smile: Part 2 coming soon!


Hello once again!

I just wanted to let everyone know that I am copying my earlier posts here and moving the background/theory/implementation discussions over to a new topic in a more appropriate forum, the Demos and Projects section. Here’s the link: Path Tracing with Babylon, Background and Implementation

I realized that my posts were becoming longer and longer and I didn’t want it to seem like I was hijacking this thread. But of course, I will return here to post if there are any new discussions about the current port that a couple of people are already working on.

To @Deltakosh: I don’t know if it’s possible, but once I successfully move everything over to my new Demos and Projects topic, feel free to remove the original background posts from this thread so this discussion can remain more focused on the port and on-topic. I will be fine with whatever you decide.

Thank you, and see you all soon! :slight_smile:



Hi everyone,

I am back to make some progress here. I was pretty busy with other stuff, and it is sometimes hard to make room for other cool projects!

@erichlof I agree it will be better to have all shaders managed in GLSL files, and this is totally doable with BabylonJS.

@JohnK this is great!! I am not familiar with BVH; can you explain how it will help port @erichlof’s work to BabylonJS? :wink:

Then I think it will indeed be better, @erichlof, to keep the discussion here about porting path tracing to BabylonJS. Even though it is utterly interesting to understand exactly how path tracing works, it will be more effective to have that in another thread, like you just did. :ok_hand:


Ok so I made the changes based on all your comments. Thanks again for helping @JohnK and @Evgeni_Popov :slight_smile:

@erichlof what is the exact list of uniforms the shader needs? I don’t know how it works in ThreeJS, but in BabylonJS we have to provide an array of strings matching the uniforms so that we can modify their values afterwards, as you can see here.
Based on your shader code I made this array, but I am not sure it is correct.
Plus I have a question for the BabylonJS team, as I have attributes by default in my ShaderMaterial options

Then I am wondering what the next steps are? Start with the first demo and try to make it work :grin:



Hello, that’s great news about your progress! If you don’t mind, let me look closely into your shader-uniform questions. I promise to get back to you sometime tonight (like in 7 hours). I’m away from my computer right now, but I’ll have time to gather the necessary info soon.

About the next step, yes I agree that we should just try to get the first geometry showcase demo working, as that uses all the core elements that most of the other demos use. The BVH demos can be down the road, because that is much more complex. But I have confidence we’ll get that working too! :smiley:


Ok I had time to look at your code and most everything looks great! 2 uniforms out of your proposed list do not necessarily need to be there: ‘uCameraJustStartedMoving’ and ‘uRandomVector’ . I think a while back I was able to remove uCameraJustStartedMoving from the GLSL files altogether and just keep all that inside the js files. However, ‘uCameraIsMoving’ is vital to each fragment shader. That one does indeed appear in all the demos.
‘uRandomVector’ was from older days when I was trying to randomize the seed inside the shader and I hadn’t yet found the great rand(uvec2) GLSL utility function by the great graphics coder, iq (of ShaderToy and Pixar fame). So unless the shader uses the uRandomVector (which I don’t think any do anymore), then you can safely remove that one from the list as well.

One thing to keep in mind is that every demo has its own unique fragment shader (where the path tracing happens), and therefore, from demo to demo, the uniform list could change slightly (each demo’s particular fragment shader usually contains the name of the demo in its title, like Geometry_Showcase_Fragment.glsl). For instance, the Geometry demo has a torus matrix that must be calculated in the js setup file and then fed to the shader via a matrix uniform. The Cornell Box demo, on the other hand, has 2 boxes whose matrices must be set up in js and fed to the shader via matrix uniforms. So in other words, the lists are mostly the same from demo to demo, but there needs to be a way of changing a couple of them around, removing, or adding as needed on a per-demo basis.
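In Babylon.js terms, one simple way to handle that per-demo variation is a shared base list plus per-demo additions. This is only a sketch; apart from uCameraIsMoving, the uniform names below are illustrative placeholders, not the actual lists:

```javascript
// Sketch: one shared base uniform-name list, extended per demo.
// Names other than uCameraIsMoving are hypothetical examples.
const baseUniforms = [
  'uTime', 'uFrameCounter', 'uResolution',
  'uCameraMatrix', 'uCameraIsMoving'
];

const demoUniforms = {
  geometryShowcase: [...baseUniforms, 'uTorusInvMatrix'],
  cornellBox: [...baseUniforms, 'uBox1InvMatrix', 'uBox2InvMatrix']
};
```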

One more thing: I may have spotted a copy-and-paste bug here. Should it instead be something like:
vertexSource: pathTracingShader.vertexShader,
fragmentSource: pathTracingShader.fragmentShader,
uniforms: pathTracingShader.uniforms
? You currently have it pointing to screenOutputShader. I don’t know, maybe I haven’t fully understood the port yet.

But anyway, great job so far! If you have any more questions, please ask! :smiley:


BVH, or Bounding Volume Hierarchy, is one way of arranging many objects positioned in space (such as the triangles of a mesh) in such a way that the ones a ray passes through can be determined in an efficient manner. Looking at Erich’s path tracing repository, there are many files related to using a BVH. The one I attempted to port is the one that builds the BVH given the positions of the objects.
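To make that concrete, here is a hypothetical sketch of how a ray uses a built BVH: whole subtrees are pruned whenever the ray misses a node’s bounding box, which is where the efficiency comes from. The node layout and names are illustrative, not taken from the actual port:

```javascript
// Hypothetical sketch: iterative BVH traversal with an explicit stack.
// A node is { min, max, left, right } for inner nodes, or
// { min, max, objects } for leaves. rayHitsBox is any ray/AABB test.
function collectCandidates(root, ray, rayHitsBox) {
  const candidates = [];
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (!rayHitsBox(ray, node.min, node.max)) continue; // prune whole subtree
    if (node.objects) {
      candidates.push(...node.objects); // leaf: these need exact intersection tests
    } else {
      stack.push(node.left, node.right);
    }
  }
  return candidates;
}
```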

Perhaps it is less necessary than I thought, as Erich suggests.


Here’s a good resource: Bounding Volume Hierarchies


Hello Everyone! So sorry for my exceptionally long pause in posting to this topic. If you have kept up with my three.js path tracing renderer, you will see I have been very busy all these months! I didn’t realize this babylon.js path tracing conversion project had come to a halt. Therefore recently, I have worked on and created a new dedicated GitHub repository for just this purpose! You can check it out at:

Although Valentin (PichouPichou) had already opened his own repo for this effort, I really felt I needed to start from scratch, as a relative beginner to Babylon.js. This was necessary for me to truly understand how the Babylon engine works and how it differs from three.js (which I was more fluent in, historically). So, Valentin - I hope you don’t mind me opening up a similar-named repo parallel to yours, but this way, I could work on the basics of getting a small path tracing demo up and running. Now that I have some of the setup requirements out of the way, maybe you and other Babylon.js users reading this thread can contribute to the project.

When I say I started from scratch, I literally just copied the first basic gray sphere-and-plane-with-dark-blue-background demo from the Babylon Playground and pasted it into my VSCode editor, ha ha! I hit some speed bumps along the way, especially with refreshing uniforms, as it is done slightly differently than in three.js, but I eventually overcame most of the hurdles. If you run the demo on my new repo, you’ll notice that it just has the raw, noisy input that never quite settles down and converges. This is because I haven’t yet figured out how to chain multiple post processes together - I will need a ping-pong buffer to do progressive rendering that refines when the camera is still. I imagine I will need a screenCopy pass-through postProcess as well as a final screenOutput postProcess whose only job is to apply gamma correction and tone mapping to bring the image into the color float (0.0-1.0) range. There are some Playground examples of pass-through post processes, but I don’t know if there are any ping-pong, feedback-style post process examples. If anyone here could take a look at my super-simple .js setup file and point me in the right direction, I would appreciate it!

Again, sorry for the delay, but now I feel that we at least have a ground base to work from. And speaking of ‘bases’, today is May the 4th (Star Wars day), so, to steal a movie reference: this ‘base is not yet fully operational’, but with your help, hopefully it soon will be! ;-D



Hi @erichlof, This is HUGE!

I am especially impressed by the fact that it works with only 80 lines of BabylonJS code :exploding_head:
I expected it to be more complex even for a basic scene.

Of course no problem at all if it is easier for you to have this new repo.
Just hope the first progress I made in the other repo helped you?

About the ping-pong buffer issue, I think @Deltakosh will know the right person to ping (if this is not himself) :wink:

“Always in motion the future is” :stars:


Yes your original repo did help, Valentin! It clued me in to how Babylon.js setup is done, and also inspired me to try and complete the project (even if it is nearly 8 months later! lol).

There were a couple of calls in your port that I did not fully understand, so again that’s why I wanted to start from square 1 and just get the setup basics down.

You’ll see in my next post that after trying to implement the ping pong buffer, I still need help. :slight_smile:


Attn: Babylon.js devs and users
A call for help!

Hello again everyone! Although I’m happy that we have a path traced image (noisy, but it will be clean when the next stage is complete) quickly rendering at 60 FPS inside the Babylon.js PathTracing Renderer, I have been unsuccessful in my first attempts to implement the necessary ping-pong (feedback loop) buffer. I tried reading through every part of the Babylon.js docs/playground where PostProcess is mentioned, and tried copying and pasting your relevant examples of PostProcess setups, but it just doesn’t seem to work. Let me clarify what we need at this stage of the project, and maybe that will help you to help me! :slight_smile:

Here’s the background and scenario: Path tracing is an inherently noisy affair because we can only randomly choose 1 ray direction out of an infinity of ray directions when the rays bounce off of rough or diffuse surfaces, like the walls of a room. So on each animation frame, to keep the framerate at 60fps, we randomly pick a single diffuse path if the pixel’s ray we’re working on at the moment happens to come in contact with a diffuse surface. Then we follow it wherever it leads us - sometimes the pixel is bright if it happens to hit a light, but sometimes it is black if it escapes into the black void without successfully hitting a light source. Hence all the animated noise on the path traced image on our project at this juncture.
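That “randomly choose 1 ray direction out of an infinity” step can be sketched as follows - a simple uniform-hemisphere rejection sampler, purely for illustration (production path tracers often use cosine-weighted sampling instead):

```javascript
// Sketch: pick one random unit direction in the hemisphere above 'normal'.
// This is the "choose 1 of infinitely many diffuse directions" step.
function randomHemisphereDirection(normal) {
  let d, lenSq;
  do {
    // random point in the unit cube, rejected until it's inside the sphere
    d = [Math.random() * 2 - 1, Math.random() * 2 - 1, Math.random() * 2 - 1];
    lenSq = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
  } while (lenSq > 1 || lenSq === 0);
  const len = Math.sqrt(lenSq);
  d = d.map(v => v / len); // normalize to the unit sphere
  const dot = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2];
  return dot < 0 ? d.map(v => -v) : d; // flip if it points below the surface
}
```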

If we had infinite computing power, we could just run an infinite loop through all the possible diffuse ray directions over the hemisphere above the diffuse surface in question. That’s kind of what happens in reality, ha. But since we need to quickly sample and move on to the next animation frame in 16 milliseconds, we have to resort to sampling and averaging over multiple frames, which, although it starts out very noisy, eventually settles on the correct, converged diffuse color (if we take a good number of samples over time, of course).

When sampling and averaging, you need the previous sample to average with. So what I need is a way to first render the random, noisy scene at 60fps (check! Already working), save that random, noisy sampled image by using a ‘screenCopy’ or ‘pass’ PostProcess (half check - I think I did this, but I’m not entirely sure), and then on the next animation frame, when we’re back inside the first pathtracing shader, first grab the old saved previous pixel color, then re-pathTrace the scene, which gives us a new noisy current pixel value, and then finally do something like previousPixel.rgb + currentPixel.rgb = new accumulated pixel (the true average comes later, when we divide by the frame count). Then once again, the screenCopy shader copies this newly accumulated pixel (which keeps getting closer to the answer as time advances) and saves it for the next animation frame, where the first pathtracing shader uses it yet again, over and over, until it settles down on the correct result. It’s as if we slowed time way down and were able to gather and sample millions of ray directions and average them together, kind of like how our eyes see a surface in realtime!

Just as an example, take a single pixel and do a random coin-flip and either output 1.0(white) or 0.0(black). Let’s say it flips ‘heads’ to 1.0 white on the first animation frame. We save that with a screenCopy shader and run the simulation again. On the next frame however, it randomly flips ‘tails’, which is 0.0 black. Now at this point to lessen the noise, we take the previous recorded pixel, (1.0 white) and then add it to our current random sample which was 0.0 black, then divide by the number of frames, 2. So, 1.0 + 0.0 = 1, divided by 2 = 0.5. We correctly end up with a gray pixel, 0.5, which, for you statistics buffs, is the statistical expected outcome to our experiment. If we do this for every pixel on the screen at the same time, essentially what we have is an old fashioned black and white TV static noise, and after a couple of seconds, it converges on a perfect, uniform smooth gray color across the entire screen.
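That coin-flip convergence is easy to check numerically. Here’s a throwaway simulation of the single pixel described above (plain JavaScript, not renderer code):

```javascript
// Simulate the coin-flip pixel: accumulate random 0.0/1.0 samples
// and divide by the frame count, exactly as described above.
function simulatePixel(frames) {
  let accumulated = 0;
  for (let frame = 1; frame <= frames; frame++) {
    accumulated += Math.random() < 0.5 ? 0.0 : 1.0; // one noisy sample
  }
  return accumulated / frames; // averaged pixel value
}
```

Run with a few hundred thousand frames, the result lands very close to 0.5, the statistical expectation.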

With my old three.js hacky way, instead of a PostProcess, I created a plane with a shader material (my path tracing shader) and slapped the plane right up on the screen, full screen with an orthographic camera (no perspective projection). I was able to set the RenderTarget for that plane material, which fed into the screenCopy plane (same style plane as the other pathtracing one). Then I set its RenderTarget back towards the initial pathtracing plane, creating a feedback loop. Now with Babylon.js, a while back someone mentioned on these forums that this is essentially working like your PostProcess system. That’s why I tried the PostProcess route first, rather than creating a plane mesh with a custom shader material and sticking it close to the screen so it’s full screen. If you tell me that this special ping-pong process cannot be done using PostProcess, then I have no problem reverting to my old hacky way of doing it with planes, as long as I can definitely set the render targets to be each other and not necessarily the screen. Actually, once I get step 1 working, there will be a 3rd and final PostProcess (or plane with shader material) called screenOutput. This last PostProcess or plane will be in charge of averaging the millions of samples collected along the way, applying gamma correction and tone mapping, and rendering to the screen. It does not have a render target. This is a separate entity from the other 2 because its gamma correction and tone mapping would ‘pollute’ the mathematically perfect linear color space of the first 2 ping-pong PostProcesses (or planes), which just feed each other unbounded linear color space numbers (way outside the rgb float 0.0-1.0 range!).

Sorry for getting into the theory weeds there, but I hope I have clarified what is needed at this point in the project. If I learn that it can’t be done with Babylon’s PostProcess (because maybe it wasn’t intended for this purpose), hopefully I can revert back to a fullscreen Babylon.js plane with my custom shader material, with the ability to set render targets to each other’s plane surfaces. Any info or suggestions are very welcome!
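The ping-pong feedback loop described above can be sketched with two plain arrays standing in for the two render targets. This is only an illustration of the idea (in the real renderer these would be GPU textures and the loop body would be a shader pass); all names here are mine:

```javascript
// Two buffers take turns: one is read from (last frame's result) while the
// other is written to (this frame's result), so a pass never reads and
// writes the same target.
const width = 4;
let readBuffer  = new Float32Array(width); // holds the previous frame
let writeBuffer = new Float32Array(width); // receives the current frame

function renderFrame(frameCount, sampleFn) {
  for (let i = 0; i < width; i++) {
    const sample = sampleFn(i);
    // Fold the new sample into everything accumulated so far.
    writeBuffer[i] = (readBuffer[i] * (frameCount - 1) + sample) / frameCount;
  }
  // Swap roles: this frame's output becomes next frame's input.
  [readBuffer, writeBuffer] = [writeBuffer, readBuffer];
}

renderFrame(1, () => 1.0);  // frame 1: every pixel samples white
renderFrame(2, () => 0.0);  // frame 2: every pixel samples black
console.log(readBuffer[0]); // prints 0.5
```

The swap at the end is the whole trick: nothing is copied, the two targets just exchange roles each frame.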

Thank you,


It’s easier to use the effect renderer/wrapper mechanism than post processes to set up a ping/pong rendering. An effect renderer/wrapper is a small wrapper around a shader that lets you run this shader and write the result into a texture you provide. Here’s an example:

Then you can use a “pass” post process that simply copies the last texture buffer, and that will let you feed this texture to an ImageProcessingPostProcess to apply tone mapping/exposure/…

Also, we should stop accumulating and restart when the camera is moved: it’s what camera.onViewMatrixChangedObservable is for.
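The reset-on-camera-move idea can be sketched with a minimal observer in plain JavaScript. The `Observable` class below is a tiny mock standing in for Babylon’s (its `add` and `notifyObservers` methods mirror the real API names), so the snippet is self-contained; the rest is my own illustration:

```javascript
// Minimal stand-in for Babylon's Observable (real API: add, notifyObservers).
class Observable {
  constructor() { this.observers = []; }
  add(callback) { this.observers.push(callback); }
  notifyObservers(data) { this.observers.forEach((cb) => cb(data)); }
}

// The accumulated-sample count that drives the running average.
let frameCounter = 120;

// Stand-in for camera.onViewMatrixChangedObservable: fires when the
// camera's view matrix changes.
const onViewMatrixChangedObservable = new Observable();
onViewMatrixChangedObservable.add(() => {
  frameCounter = 0; // stale samples no longer match the new view: restart
});

// Simulate the user moving the camera.
onViewMatrixChangedObservable.notifyObservers();
console.log(frameCounter); // prints 0
```

Resetting the counter is enough: with the running-average formula, a frame count of 1 on the next frame overwrites the buffer with the fresh sample.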

Here’s a PG with those changes:

[EDIT] Forgot one thing: I created a PG as it is easier to update and see the results in realtime.


Thank you so much! I have to go into work atm, but as soon as I get back tonight, I will study your examples. Thanks also for creating the PGs, as, like you said, I can see how everything is setup and the effects in real time.

Also I’m glad you used the word ‘accumulating’ in your post with regard to the camera motion, because now that I think of it, that’s actually what I’m asking for in the end: an ‘accumulation’ buffer. Rather than ping-ponging for just a couple of frames, I need to ‘remember’ and retain every sample as it is generated. Then the screenOutput’s job is to present to the screen: bring this unbounded linear color accumulation into the 0.0-1.0 range by averaging by the number of frames so far, then tone map (or, like you suggested, run it through a PostProcess image enhancer dedicated to that purpose), and finally apply simple gamma correction by raising the final pixel color to the power of 0.4545, i.e. 1/2.2 (if I recall correctly; I’ll have to review my old final output shader).
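That screenOutput stage can be sketched per channel in plain JavaScript. The function name is mine, and the Reinhard curve here is just one common tone-mapping choice, not necessarily what the renderer uses; a real implementation would do this per pixel in the fragment shader:

```javascript
// Bring an unbounded linear accumulation into a displayable 0-1 value.
function screenOutput(accumulatedColor, frameCount) {
  // 1) Average by the number of frames accumulated so far.
  let c = accumulatedColor / frameCount;
  // 2) Tone map the unbounded linear value into 0-1
  //    (simple Reinhard curve shown here as one common choice).
  c = c / (1.0 + c);
  // 3) Gamma correction: raise to the power 1/2.2 (~0.4545).
  return Math.pow(c, 0.4545);
}

// An accumulated value of 300 over 100 frames averages to linear 3.0,
// well outside the 0-1 range, yet maps to a valid display value.
console.log(screenOutput(300.0, 100));
```

Because steps 2 and 3 are nonlinear, they must stay out of the ping-pong buffers and run only on the final output, exactly as described above.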

At any rate, thanks again for posting these helpful examples and suggestions! I’ll let you know how it goes once I’ve had a moment to try them!


That’s what the sample is doing: the current buffer = (last buffer * (numFrames-1) + currentColor) / numFrames. As we are ping ponging, the last buffer contains the average of all the samples so far.

The image processing post process also does the gamma conversion (after applying the tone mapping, exposure, etc.), so there is no need for a separate computation.

In fact, it is doing exactly what you described above:

Actually once I get step 1 working, there will be a 3rd and final PostProcess (or plane with shader material) called screenOutput. This last PostProcess or plane will be in charge of averaging the millions of samples collected along the way, applying gamma correction, and tone mapping, and showing or rendering to screen. It does not have a render target. This is a separate entity from the other 2 because its gamma corrections and tone mapping would ‘pollute’ the mathematically perfect linear color space of the first 2 ping-pong PostProcesses (or planes), which just feed each other unbounded linear color space numbers (way outside the color rgb float 0.0-1.0 range!).

The ping pong buffers contain the averaged linear values, and the image processing post process is doing the tone mapping + gamma correction: it is doing it each frame on the output from the current ping pong buffer (but it does not modify this buffer!).


It’s going well so far, thanks to your super-helpful examples! I had one minor question before I merge the new system into the GitHub repo:
When declaring an effect wrapper, I want to specify the file location of my path tracing shader, named pathTracing.fragment.fx. Thinking I could use the same pattern as declaring a new PostProcess, I have tried the following without success:

const eWrapper = new BABYLON.EffectWrapper({
    engine: engine,
    fragmentShader: "./shaders/pathTracing",
    uniformNames: ["uResolution", "uULen", "uVLen", "uTime", "uFrameCounter", "uEPS_intersect", "uCameraMatrix"],
    samplerNames: ["previousBuffer"],
    name: "effect wrapper"
});


As you can see, I tried putting the relative path, which works when instantiating a new PostProcess, but not so much here.

If it is not possible, I can just change my .fx files to ShadersStore and paste them at the bottom of my setup .js file, like you did with your playground example.

Thanks again!

The effect wrapper does not work with paths: you can either pass the shader source code directly (fragmentShader: "source code here"), or put the code in the shader store, set useShaderStore: true, and pass the name to look up (fragmentShader: "name to look up in the shader store").
