Path-tracing in BabylonJS

Hi @Deltakosh, thanks for the answer.

I took a look at THREE.js-PathTracing-Renderer again. There are a lot of shaders here, so it seems that the path-tracing code is not ThreeJS dependent.

Indeed, in the example we can see that they also create a “simple” shader material, as you suggested:

pathTracingMaterial = new THREE.ShaderMaterial({
    uniforms: pathTracingUniforms,
    defines: pathTracingDefines,
    vertexShader: pathTracingVertexShader,
    fragmentShader: pathTracingFragmentShader,
    depthTest: false,
    depthWrite: false
});

I also saw that the pathTracingUniforms variable contains the scene render texture holding the 3D assets, which confirms that we could possibly use any scene.
I guess this means that we could “simply” (but it won’t be simple :laughing:) reproduce what is done with ThreeJS, but with BabylonJS?
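For reference, here is a hedged sketch of what the BabylonJS counterpart might look like (the shader and uniform names are hypothetical placeholders; option names follow the BABYLON.ShaderMaterial API):

```javascript
// Hypothetical port of the ThreeJS material above to BabylonJS.
// "pathTracing" would reference entries in BABYLON.Effect.ShadersStore
// ("pathTracingVertexShader" / "pathTracingFragmentShader").
const pathTracingMaterial = new BABYLON.ShaderMaterial(
    "pathTracingMaterial",
    scene,
    { vertex: "pathTracing", fragment: "pathTracing" },
    {
        attributes: ["position"],
        // In BabylonJS, uniforms are declared by name here and set later
        // with setFloat/setTexture/etc., unlike ThreeJS's uniforms object.
        uniforms: ["uTime", "uResolution", "uCameraMatrix"],
    }
);
pathTracingMaterial.disableDepthWrite = true; // depthWrite: false equivalent
```

The depthTest: false part has no single material option in BabylonJS as far as I know; depth testing would need to be disabled separately (e.g. via the engine's depth buffer state).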

I hope Erich Loftis is ok with reverse engineering :upside_down_face:

1 Like

I agree :slight_smile:

2 Likes

Hello all!
I’m Erich, the developer of the three.js PathTracing Renderer. @PichouPichou has recently contacted me and asked if I would help build a similar renderer using the Babylon.js library. I replied “I’m in!”.

@Deltakosh I must admit that I have been mainly involved with the three.js library way back shortly after Ricardo first started it. But I always have respected the open-source and feature-rich Babylon.js library and as I mentioned to Valentin, kudos to you David and your team! It’s been a while since I tracked the progress or sat down and tried all the demos on the example page - but wow, Babylon has come a long way over the years!

Valentin suggested I post on this thread in hopes that our public discussions and collaborations might be of benefit to all interested in the world of ray/path tracing and Babylon.js. Now I am not an expert in path tracing, the math behind it, or GPU programming, but over the last 5 years while working on my project I have been able to put the necessary pieces together to make real time path tracing happen inside the browser, whether on desktop or mobile.

Having said that, my project does have some hacks that I’ve had to come up with to get it to work with three.js, WebGL, or both. One of these is that the GeometryShowcase demo, for instance, which looks like a typical demo from the core library displaying its core shapes (like sphere, box, cylinder, etc.), actually does not use the three.js code for those shapes at all. Instead, they are defined mathematically in the fragment shader. If I did use the three.js building code for defining these shapes, I would end up, as you’re well aware, with a bunch of triangles to feed the path tracer. And as you also know, rendering any significant number of triangles is very expensive, and without a BVH it becomes non-real-time very quickly. So all basic shapes in my entire project use well-known math routines from the standard ray tracing libraries. But all the triangle models use the actual triangle data (stored in a data texture on the GPU), and then the fragment shader steps through the pre-built (.js code) BVH and renders whatever triangles you feed it, like any major renderer out there.
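To illustrate the "triangle data stored in a data texture" idea, here is a tiny hedged sketch (the layout is an assumption for illustration, not the project's actual format): one vertex per RGBA texel, with the alpha channel left as padding.

```javascript
// Hypothetical packing of triangle vertices into an RGBA float texture.
// triangles: array of [v0, v1, v2], each vertex being [x, y, z].
function packTriangles(triangles) {
  const texels = triangles.length * 3; // one texel per vertex
  const data = new Float32Array(texels * 4);
  let i = 0;
  for (const tri of triangles)
    for (const v of tri) {
      data[i++] = v[0]; data[i++] = v[1]; data[i++] = v[2];
      data[i++] = 0; // padding (alpha channel unused)
    }
  return data; // upload via e.g. a raw/data texture, sample in the shader
}
```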

I suggested to Valentin that we start off simple, and just try to get a simple sphere scene working first, and then maybe down the road we can add true triangle model rendering. Oh I forgot to mention that even though the renderer fragment shader does not call the library’s shape definition routines, it still can rely on the library’s notion of an object transform. In three.js, this is called an Object3D() and defines a matrix for position, scale, and rotation. I’m sure Babylon has an equivalent data structure. This allows the .js file to feed the shader the transform of every scene object, even though the .js file doesn’t even know what shape it is referring to - it is simply an empty object placeholder with a transform.
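The "empty object placeholder with a transform" idea can be sketched like this (names are hypothetical; in Babylon.js, TransformNode would likely play the role of three.js's Object3D):

```javascript
// Hypothetical placeholder: the JS side only knows a matrix per object;
// the fragment shader decides what mathematical shape lives there.
class TransformPlaceholder {
  constructor(position = [0, 0, 0], scale = 1) {
    this.position = position;
    this.scale = scale;
  }
  // Column-major 4x4 with uniform scale + translation (rotation omitted
  // here for brevity) - this is what gets fed to the shader as a uniform.
  toMatrix() {
    const [x, y, z] = this.position, s = this.scale;
    return [s, 0, 0, 0,  0, s, 0, 0,  0, 0, s, 0,  x, y, z, 1];
  }
}
```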

So even though the simple demos don’t rely on the core shapes of the host library, they definitely benefit from all the math/matrix routines going on in JavaScript behind the scenes as the scene is updated many times a second. Also, having a host library like Three or Babylon provides automatic support for loading models, loading textures, user input, window sizing, WebGL management, UI, etc… In fact having a library to build off of is the reason I ‘piggybacked’ Three, as opposed to something like MadeByEvan’s older path tracer, that did everything from scratch. I definitely needed/need a WebGL library in place to build the path tracer on! :wink:

Well, sorry for the long-winded intro but I wanted to be as clear and open as possible about the challenges and benefits of building a renderer on top of a WebGL library. To David: if we run into issues along the way (and I’m sure we will!) when trying to implement this using Babylon, I hope that you can guide or help us, or point us in the right direction.

Looking forward to working with you all! :slight_smile:
-Erich

8 Likes

Thank you so much, @erichlof, this is amazing :smile:

1 Like

That is FANTASTIC.

First, thanks a lot for your feedback; the whole team gets its energy from this kind of feedback!

Then let me officially welcome you to the family :) It is a pleasure and an honor to have you with us!

All the team (not just me) and the community will be here to help you along the way. And by the way, I know that @cedric will be more than happy to help you, as he is currently working on a series of blog posts about path tracing :smiley:

Welcome!

2 Likes

I have been looking at BVH for Babylon.js: Bounding volume hierarchies (BVH) for ray tracing with Morton Codes

@erichlof is this of use in producing an optimised official version in Babylon.js or do you already have code for this?

Hello @JohnK ,

Wow that Morton Code BVH in .js is impressive! The BVH time was definitely faster than the Octree time in the Helmet demo. And the Sphere demo was really fast. Good work!

As to whether you should implement this as the general Babylon.js ray tracing structure, or use something like I have, let me start off by admitting that of all the dozens of components necessary to implement ray tracing, the acceleration structure is my weakest area of knowledge. I have read (but mostly not understood) several research papers, some by companies like Nvidia, most from academia, about comparing different acceleration structure choices, or how they did their unique spin on the classic structures, like your Morton code example.

The good news for us that are using BVHs, is that most scholars and pro programmers alike seem to generally agree that the BVH comes out on top, over regular grids, over Octrees, and slightly over KD trees for ray tracing. This is evident in your Helmet example. Also, Nvidia’s RTX cards use only BVHs for their newest cutting edge demos. I’ve heard them mention it in a couple of technical presentations. In fact, if I’m not mistaken, they even have a dedicated processing unit on the cards that does nothing but BVH handling and traversal. So if they’re doing it, we can’t be too far off the mark! Ha

Unfortunately, when trying to implement a BVH in the Nvidia manner, their structure building and traversal is proprietary, as their hardware sales and ray tracing software rely on it. Therefore, when I read an Nvidia paper, I feel like I’m not getting the whole picture, or the whole source code, that I would need to implement it myself.

Contrast that with a paper from academia (often as part of a thesis or dissertation) and you may get more source code, or Github link if you’re lucky, but I personally find it hard to wade through the formal, sometimes math-heavy text. Since it comes from an old tradition, I feel that the authors are conforming to a more formal academic research paper dynamic. Which doesn’t bode well for me, a non-CS, non-math degreed hobbyist coder!

Please take a look at my BVH .js builder. I credit the original author at the top of the file. His GitHub BVH C++ code inspired me and kind of directed me in how to approach this complex topic. As for the GPU storage and traversal of this structure, I must confess that I had to reverse engineer some minified fragment shader code inside the Antimatter WebGL path tracer by inspecting it in the browser developer tools, because it was not open source or on GitHub anywhere. As ‘penance’ for borrowing like that, I manually added spaces and sensible variable and function names to the minified code (working with lines like int g = c(h, j);) and worked through it line by line. Since then I have made it my own, optimized it, and made it work with WebGL 2, so I don’t feel so bad for using copied code, ha.

So if you feel that Morton codes would speed up building, by all means go with it! I must admit I really don’t understand Morton codes yet, or how to implement something like that inside the BVH building. Having said that though, I feel like my current traditional BVH runs pretty fast on the GPU in the browser for simple scenes.

This reply is already too long, but in a later post I will give an overview of how the BVH building is done, how it is stored on the GPU, and how it is traversed on my current renderer. That way, anyone who is interested in diving deeper into that world can hopefully benefit from our discussion and comments. Who knows, maybe someone will help me improve it, or help me get it to work with large models on mobile (which is still an outstanding issue that I don’t fully know how to deal with)!

Thanks for contributing to this thread!

@Deltakosh
That’s great! Thank you for the friendly welcome. I’m so glad that you and others will be willing to help. I think that @PichouPichou will be creating a public GitHub repo for this project, so we can all see what’s happening and work together. From what I’ve seen of the demos, Babylon.js already has all the necessary components to make this happen. I’m confident that we’ll get something up and running relatively soon!

@erichlof thank you for your reply. As you have a tried and tested means, both in Javascript and for the GPU, to generate BVHs, it makes sense for those to be ported to Babylon.js as path tracing is brought into BJS. As for using Morton codes over other methods, I am not sure there would be any gain, or if there is, how significant it would be over your BVH build. Also, working with the GPU is not in my skill set.

I will follow developments with interest and once completed try to pick up on using BVH for collisions of complex meshes with many triangles.

It’s great to have you on board.

1 Like

Regarding BVH (and for a lot of subjects related to computer graphics too), a great resource is the “Physically Based Rendering - From Theory to Implementation” book: http://www.pbr-book.org/

In chapter 4, you get the full implementation of BVH (in C++ but easily understandable) with detailed explanations.

1 Like

Hey, everybody!

It’s really nice to see such motivation in this project. I know I can thank you all in advance for your help and contribution.

I have to admit that I don’t understand everything you say about the different methods of path tracing, but it’s really nice to read you already discussing the different methods!

We are seeing more and more path-tracing demos using WebGL, and I think it’s time BabylonJS had its own so we can all have fun with this rendering technique!

To start officially, we just need a GitHub project, which I’ll share with you as soon as possible along with my first progress, thanks to the explanations @erichlof already sent me by email.

Cheers to everyone, @PichouPichou

2 Likes

Hello! Ah yes, I forgot about the pbrt BVH chapter. I had recently gone through the entire book and I kind of skimmed over the BVH chapter because I had just got mine working - so I think I must have moved on to the next chapter, ha. But now I will go back and more carefully study it. btw the partitioning algorithm that I use is the SAH (surface area heuristic). It is in line with and similar to the one in the pbrt book chapter 4 you referred to. However, when trying to do the binning for best splitting plane position along the chosen axis (X, Y, or Z), it sometimes hung up and crashed on me because it couldn’t resolve the minimum cost (no matter where it tried moving the split), which is required by the SAH. It might be a Javascript precision issue when you get groups of really close triangles that you have to split up. So I ended up still trying to do a minimum cost SAH, but I only compare each of the 3 cardinal axes with the split plane at the exact midpoint of that dimension of the box. Therefore, in the end, I kind of have a hybrid of the two pbrt choices - SAH and Middle. I believe the other choices were Morton(LBVH) and equalCounts of triangles in each child.
If those reading this do not know what we’re talking about, I will explain at a later point in detail how all this works with my current renderer. Don’t worry, it took me a long time to come to grips with this complex system! :slight_smile:
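For readers who'd like something concrete now, here is a heavily simplified sketch of the hybrid split described above (an illustration only, not Erich's actual builder): each of the three cardinal axes is tried with the split plane fixed at the midpoint of the centroid bounds, and the cheapest candidate under the SAH cost SA(left) * nLeft + SA(right) * nRight wins.

```javascript
// Surface area of an axis-aligned bounding box.
function surfaceArea(min, max) {
  const dx = max[0] - min[0], dy = max[1] - min[1], dz = max[2] - min[2];
  return 2 * (dx * dy + dy * dz + dz * dx);
}

// AABB of a set of [x, y, z] points.
function boundsOf(points) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const p of points)
    for (let a = 0; a < 3; a++) {
      min[a] = Math.min(min[a], p[a]);
      max[a] = Math.max(max[a], p[a]);
    }
  return { min, max };
}

// centroids: array of [x, y, z] triangle centroids for the current node.
// Returns the cheapest midpoint split, or null (make a leaf instead).
function chooseMidpointSplit(centroids) {
  const { min, max } = boundsOf(centroids);
  let best = null;
  for (let axis = 0; axis < 3; axis++) {
    const mid = 0.5 * (min[axis] + max[axis]);
    const left = centroids.filter(c => c[axis] < mid);
    const right = centroids.filter(c => c[axis] >= mid);
    if (left.length === 0 || right.length === 0) continue; // degenerate split
    const lb = boundsOf(left), rb = boundsOf(right);
    const cost = surfaceArea(lb.min, lb.max) * left.length +
                 surfaceArea(rb.min, rb.max) * right.length;
    if (!best || cost < best.cost) best = { axis, mid, cost };
  }
  return best;
}
```

A full builder would recurse on the two halves and handle the all-centroids-coincide case as a leaf, which is exactly the kind of degenerate grouping the post mentions as a crash hazard.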

2 Likes

Hi everyone.

Here it is for the Github: GitHub - nakerwave/BabylonJS-PathTracing-Renderer

Basically, in this first step I translated the ThreeJS usage into the corresponding BabylonJS.
@erichlof I don’t know if you are familiar with Typescript, but BabylonJS is using it and so am I. Plus, I use the EcmaScript Modules of BabylonJS so that we import only what we need, and with Typescript it makes autocompletion very powerful!

In the src folder you will find 3 main files:

I also started a demos/Geometry folder for a further step.

I have several questions of course after these early developments! :grin:
For @erichlof

  • I don’t understand what the screenTextureShader.uniforms value is. There is a uniforms parameter in the creation of a BabylonJS ShaderMaterial, but it is an array of strings matching the shader variables. To help, here are the links to the BabylonJS ShaderMaterial documentation and the ShaderMaterial Class
  • What is the difference between elapsedTime and frameTime?

And for BabylonJS Team:

  • I have used the BabylonJS RenderTargetTexture to replace the ThreeJS WebGLRenderTarget, but the options are not exactly the same. Maybe someone could take a look to see if I have used the right ones. For instance, I don’t know what the minFilter and magFilter equivalents are in BabylonJS, maybe boundingBoxSize? RenderTarget Class

Other comments:

  • Instead of triggering every event with a custom cameraControlsObject, I think the best approach would be to just attach the controls to the worldCamera, let BabylonJS manage the camera input, and then use the worldCamera coordinates to update the scene. I guess we will have to extract the fov for the shader from the camera, for instance.
  • I feel this article about RenderTargetTexture and Multipass will help with further progress.

Of course, any idea about the project to facilitate our collaboration is welcome and I am very happy that it is really getting off the ground and I can’t wait to see it come to life! :innocent:

2 Likes

Hi Valentin, great work so far!

Yes I am ok with Typescript - I don’t use it every day, but I can sort of get the gist of what’s happening and how it’s different from Javascript.

As to your questions - yes, sorry about the screenTextureShader.uniforms value being in another file. If you look at the top of my pathTracingCommon.js file, you’ll see those uniforms defined. I realize now that I should have been more consistent in my shader text handling. I think the reason I did it this way is that screenTextureShader and screenOutputShader were both so short, and changed hardly at all throughout this project’s development, that I decided just to put the uniforms, vertex shader code, and fragment shader code in single-quoted strings and ask Three to load them in.

Now, the pathTracingVertexShader and pathTracingFragmentShader are the heart of the path tracer and must be constantly edited, so instead of having to work with single quotes everywhere in my editor, I just loaded the GLSL code into Visual Studio Code, enabled GLSL language support, and then I have linting, syntax highlighting, etc. on the file.

In hindsight, and also for moving forward, it might be better to have all the vertex and fragment shaders in their own files - some very tiny, the main path tracing fragment shader huge - and save them with their native GLSL extension in a ‘shaders’ folder. In other words, no quotes: a pure GLSL file. But that is only if Babylon has the capability to load an arbitrary file like that and turn it into text to be given to the material as the vertex and fragment shaders and uniforms. If not, we will have to put every line of GLSL code in single quotes (or maybe there’s a way to do it with the whole file in one set of quotes?). I’m fine with either way, but I think it should be consistent across all shaders for clarity’s sake.

About elapsedTime and frameTime: elapsedTime is like a second counter (like a regular upwards-counting second hand on a real clock), whereas frameTime is how much time has passed since the last frame. If the app is not running at a constant 60 fps, then this number can vary randomly from frame to frame. I need both notions of time because frameTime is like the traditional deltaTime and is multiplied against anything that moves in the scene on the Javascript side of things - so that even if the user’s framerate is not consistent, at least the apparent movement speed of objects will be. elapsedTime is useful for feeding the incremental uTime shader uniform so that, for instance, if there are moving water waves or moving clouds, their UVs can be driven by it.
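A minimal sketch of how the two notions of time might be maintained each frame (variable names mirror the post; the requestAnimationFrame driver and first-frame handling are omitted):

```javascript
let lastMs = 0;
let elapsedTime = 0; // seconds since start - fed to the uTime uniform
let frameTime = 0;   // seconds since the previous frame (deltaTime)

function tick(nowMs) {
  frameTime = (nowMs - lastMs) * 0.001;
  lastMs = nowMs;
  elapsedTime = nowMs * 0.001;
  // e.g. object.position.x += velocity * frameTime; // framerate-independent
  // e.g. uniforms.uTime = elapsedTime;              // drives waves/clouds
}
```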

Sorry again about the inconsistency of shader texts and the file hopping between definitions. When I first got all this working, I was just so happy that I moved on to the next TODO, without thinking much how to unify and clarify all the setup bootstrap code and maybe give more thoughtful variable names.

I’m confident however that Babylon.js has all the necessary components to make it work, and maybe in the process of porting, we will be able to refactor and clean up the setup code.

Thanks again for starting the repo!

1 Like

Use the samplingMode parameter to set the min/mag filter.

In 3js:

minFilter: THREE.NearestFilter,
magFilter: THREE.NearestFilter,

means to use nearest filtering when the texture is minified and magnified.

There’s a generateMipmaps option (not described in the 3js doc): if not provided, no mipmaps are generated (so what you did is ok as you pass false for the generateMipmaps parameter of RTT).

So, the value to pass for samplingMode is Constants.TEXTURE_NEAREST_SAMPLINGMODE (mag = nearest and min = nearest and no mip).

For the type parameter you should pass Constants.TEXTURETYPE_FLOAT.

For format, use Constants.TEXTUREFORMAT_RGBA.
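Putting the mapping above together, the RenderTargetTexture creation might look like this (a sketch; I'm assuming the long-form constructor signature, so double-check the parameter order against the current RenderTargetTexture docs):

```javascript
// Hypothetical RTT setup mirroring the ThreeJS WebGLRenderTarget options:
// nearest filtering, no mipmaps, float RGBA.
const rtt = new BABYLON.RenderTargetTexture(
    "pathTracingTarget",
    { width: engine.getRenderWidth(), height: engine.getRenderHeight() },
    scene,
    false,                                          // generateMipMaps
    true,                                           // doNotChangeAspectRatio
    BABYLON.Constants.TEXTURETYPE_FLOAT,            // type
    false,                                          // isCube
    BABYLON.Constants.TEXTURE_NEAREST_SAMPLINGMODE, // samplingMode
    false,                                          // generateDepthBuffer
    false,                                          // generateStencilBuffer
    false,                                          // isMulti
    BABYLON.Constants.TEXTUREFORMAT_RGBA            // format
);
```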

2 Likes

Thanks for both your answers. This is really helpful!

@erichlof what is the math to obtain elapsedTime then? I can get the fps and the frameRate from BabylonJS, but nothing about elapsedTime though :wink:
I agree the shaders should be managed in separate GLSL files to make them easy to modify. This will really help in the long term if we need to make shader modifications.

I will come back to you once the modifications have been done. :upside_down_face:

Hope nobody minds me butting in about elapsedTime and frameTime - and I hope I have it correct.

In this PG https://www.babylonjs-playground.com/#KT9EE7#27

The (XZ) movement of the sphere is based on frameTime; its change in position (XZ) per frame is given by velocity * frameTime.

The movement of the surface is based on elapsedTime; its shape is given by a function of an angle, the angle at any time depends on elapsedTime, so the shape is a function of elapsedTime.

In this case the elapsedTime is given by the ‘performance.now()’ method, however it could also be found using

elapsedTime += frameTime;

The y position of the sphere is calculated directly from its current (x, z) position from the ribbon array.

1 Like

You are correct @JohnK, thank you!

Yes @PichouPichou, like JohnK said, elapsedTime can be acquired by calling performance.now() every animation frame.

let elapsedTime;

…and inside the tight .js animation loop,…

elapsedTime = performance.now() * 0.001;

elapsedTime (via performance.now()) will return an ever-increasing accurate timer/counter in milliseconds, so that is why you have to multiply it by 0.001 or divide by 1000 to go from milliseconds to seconds. If I’m not mistaken, I believe it counts upwards from when the webpage was first navigated to (which would be time 0.0000…).

One additional thing to keep in mind is that if the path tracing shader is depending on this elapsedTime to drive uv texture movement or spin a spinning object over and over again, the accuracy becomes poor once the elapsedTime number gets too large. So to combat this, I put an additional modulo operation on elapsedTime, like so:
elapsedTime = (performance.now() * 0.001) % 1000;

Once the app has been running for 1000 seconds, it starts over at 0.0. This is an arbitrary number - but it should be small enough to keep elapsedTime a reasonable floating point number in seconds when it is used in math operations in the shader, and large enough that the user will likely not see it reset to 0 (which might be jarring if it is controlling wave motion or a smoothly changing angle). I suppose the number could be a multiple of 2*Pi somewhere around 1000, so that if it was indeed used for angular velocity, it would smoothly reset/repeat. But I don’t know if that is too important.
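As a sketch, the wrap described above boils down to a single helper (the name here is hypothetical):

```javascript
// Converts a millisecond timestamp (e.g. from performance.now()) to
// seconds, wrapping back to 0 every `periodSeconds` to keep the float
// small enough for accurate math in the shader.
function wrappedElapsedSeconds(nowMs, periodSeconds = 1000) {
  return (nowMs * 0.001) % periodSeconds;
}
```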

Thanks for all your work! :slight_smile:

Have done a Typescript version of Erich’s BVH_Acc_Structure_Iterative_Builder along with a PR to GitHub - nakerwave/BabylonJS-PathTracing-Renderer. Not at all familiar with Typescript, so it will need improving and checking. Hope it is helpful.

4 Likes

Hello everyone!

Now that people are working on porting my renderer over to the Babylon.js engine, I was thinking that this would be a good time and place to give a general overview of how ray/path tracing works inside the WebGL2 system and the browser. That way, all who are interested, but who might not be able to contribute to the source, might benefit from the details and approaches that I took.

Warning: sorry, this will be an epic-length, multi-part post! I hope everyone is ok with that! :slight_smile:

Part 1 - Background

I’ll begin with some of the differences between traditional rendering and ray tracing. Traditional 3D polygon rendering makes up probably 99.9% of real-time games and graphics, no matter what system or platform you’re on. So whether you’re using DirectX, OpenGL, WebGL2, a software rasterizer, etc., you will probably do it/see it done this way. In a nutshell, vertices of 3D triangles are created on the host side (Javascript, C++, etc.) and sent to the graphics card via a vertex shader. The graphics card (from here on referred to as ‘GPU’) takes the input triangle vertex info and usually performs some kind of matrix transformation (perspective camera, etc.) on it to project the 3D vertices to 2D vertices on your screen. The projected triangles are then fed to the fragment shader, which rasterizes (fills in, or draws) the pixels contained inside each flattened triangle.
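The projection step described above can be sketched with a minimal pinhole model (an illustration only; real pipelines use a full 4x4 projection matrix and clip space):

```javascript
// Projects a camera-space vertex [x, y, z] (z > 0 in front of the
// camera) to pixel coordinates, using a simple perspective divide.
function projectToScreen(v, focalLength, width, height) {
  const sx = (v[0] * focalLength) / v[2];
  const sy = (v[1] * focalLength) / v[2];
  return [width / 2 + sx, height / 2 - sy]; // y flipped for screen space
}
```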

The major benefit in doing it this way for several decades now, is that as GPUs got more widespread, they were built for this very technique and are now mind-numbingly fast at rendering millions of triangles on the screen at 60fps!

One of the major drawbacks to this seemingly no-brainer fast approach is that in the process of projecting the 3D scene onto the flat screen as small triangles, you lose global scene information that might be just outside the camera’s view. Most importantly, you lose lighting and shadow info from surrounding triangles that didn’t make the cut to get projected into the user’s final view, and there is no way of retrieving that info once the GPU is done with its work. You would have to do an exhaustive per-triangle look-up of the entire scene, which would make the game/graphics screech to a halt. Therefore, many techniques have been employed and much research has been done to make it seem to the user that none of the global illumination and global scene info has been lost. Shadow maps, screen-space reflections, screen-space ambient occlusion, reflection probes, light maps, spherical harmonics, cube maps, etc. have all been used to try and capture the global scene info that is lost if you choose to render this way. Keep in mind that all these techniques are ultimately ‘fake’ - sort of ‘hacks’ that are subject to error and require a lot of tweaking to look good for even one scene/game; then everything might not look right for the next title and the process begins again. Optical phenomena such as true global reflections, refractions through water and glass, caustics, and transparency become very difficult or even impossible with this approach. BTW, this is known as a scene-centric or geometry-centric approach. In other words, vertex info and scene geometry take the front seat; global illumination and per-pixel effects take a back seat.

Now another completely different rendering approach has been around the CG world for nearly as long - ray tracing. If the rasterizing approach was geometry-centric, then this ray tracing approach can be classified as pixel-centric. In this scenario, on a traditional CPU, the renderer looks at 1 pixel at a time and deals with that pixel and only that pixel; when it is done, it moves over to the next pixel, and the next, etc. When it is done with the bottom-right pixel on your screen, the rendering is complete. Taking the first pixel, rather than giving it a vertex to possibly fill in or rasterize, the renderer asks the pixel, “What color are you supposed to be?”. So in this pixel-centric approach, the pixel color/lighting takes a front seat, and the scene geometry is queried once it is required. The pixel answers, “Well, let me shoot out a straight ray originating from the camera through me (that pixel of the screen viewing plane) out into the world and I’ll let you know!” Let’s say it runs into a bright light or the sun - great, a bright white or yellowish-white color is recorded and we’re done! Well, with that one pixel. Say the next pixel’s view ray hits a red ball. Now we have a choice: we can either record full red and move on to the next pixel, ending up with a boring non-lit scene, or we could keep querying for more info, like: “Now that we’re on the ball, can you see the sun/light source from this part of the red ball?” If so, we’ll make it bright red and move on; if its line of sight to the light is blocked, we’ll make it a darker red. How do we find this answer? We must spawn another ray out of the red ball’s surface, aim it at the light, and again see what the ray hits. If we turn the red ball into a mirror ball, a ray must be sent out according to the simple optical reflection formula, and then we see what the reflected ray hits.
If the ball was instead made out of glass, at least 2 more rays would need to be sent: one moving into the glass sphere, and one emerging from the back of the sphere out into the world behind it, whatever might be there. Although more complex, there are well-known optical refraction formulas to aid us. But the main point is that with this rendering approach, the pixels are asked what they see along their rays, which might involve a complex path of many rays for one pixel, until either a ray escapes the scene, hits a light source, or a maximum number of reflections is reached (think of a room of mirrors - it would crash any computer if not stopped at some point). From the very first ray that comes from the camera, it has to query the entire scene to find out what it runs into and ‘sees’. Then, if it must continue on a reflected/refracted/shadow-query path, it must query the entire scene again at each juncture! No hacks or assumptions can be made. The whole global scene info must be made available at every single turn in the ray’s life.
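The "simple optical reflection formula" mentioned above is r = d - 2(d·n)n, with n a unit surface normal; as a tiny sketch:

```javascript
// Mirror reflection of incoming direction d about unit normal n.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

function reflect(d, n) {
  const k = 2 * dot(d, n);
  return [d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2]];
}
```

For example, a ray heading diagonally down onto a floor (normal pointing up) bounces back up at the mirrored angle.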

As far as we know, this is how light behaves in the real world. Although a major difference is that everything in the real world happens in reverse: the rays spawn from the light source, out into the scene, reflecting/refracting, and only a tiny fraction is able to successfully ‘find’ and enter that first pixel in your camera’s sensor. However, this is a computational nightmare because the vast majority of light rays are ‘wasted’ - they don’t happen to enter that exact location of your camera’s single pixel. So we do it in reverse: loop through every viewing plane pixel (which is infinitely more efficient computationally) and instead send a ray out into the world hoping that it’ll eventually hit objects and ultimately a light source. Therefore the field of ray tracing is actually ‘backwards’ ray tracing.

The benefits of rendering this way are that, since we are synthesizing the physics of light rays and optics, images can be generated that are photo-realistic. Sometimes it is impossible to tell if an image is from a camera or a ray/path tracer. Since at each juncture of the ray’s path the entire ‘global’ scene must be made available and interacted with, this technique is often referred to as global illumination. All hacks and approximations disappear, and once-difficult or impossible effects such as true reflections/refractions, area-light soft shadows, transparency, etc. become easy, efficient, and nearly automatic. And these effects remain photo-realistic. No errors or tweaks on a per-scene basis are necessary.

This all sounds great, but the biggest drawback to rendering this way is speed. Since we must query each pixel, and info cannot be shared between pixels (it is a truly parallel operation with individual ray paths for each individual pixel), we must wait for every pixel to run through the entire scene, making all the calculations, just to return a single color for 1 pixel out of a possible 1080p monitor. This is CPU style, by the way - the GPU hasn’t entered, yet. Many famous renderers and movies have been made using this approach, coded in C or C++, and run traditionally on the CPU only.

Then fairly recently, maybe 15 years ago, when programmable shaders for GPUs became a more widespread option, graphics programmers started to use GPUs to do some of the heavy lifting in ray tracing. Since it is an inherently parallel operation, and GPUs are built with parallelism in mind, the speed of rendering has gone way up. Renderings for movies that once took days now take hours or even minutes. And in the cutting-edge graphics world, ray-traced reflections and shadows (and to a lesser extent, path tracing with true global illumination) have even become real time at 30-60 fps!

My inspiration for starting the path tracing renderer was being fascinated by older Brigade 1 and Brigade 2 YouTube videos. Search for the Sam Lapere channel, which has 173 insane historic videos. He and his team (which included Jacco Bikker at one point) were able to make 10-15 year old computers approach real-time path tracing. I was also inspired by Kevin Beason’s smallpt: that he could fit a non-real-time CPU renderer, complete with global illumination, into just 100 lines of C++! I used his simple but effective code as a starting point and moved it to the GPU with the help of WebGL and the browser. In the next part, I’ll get into the details of how I had to set everything up in WebGL (with the help of a Javascript library) in order to get my first image on the screen inside a browser.

So in conclusion, this was a really long way of saying that when you choose to go down this road, you are going against the grain, because GPUs were meant to rasterize as many triangles as possible in a traditional projection scheme. However, with a lot of help from some brilliant shader coders / ray tracing legends of yesteryear, and a decent acceleration structure, the dream of real-time path tracing can be made into reality. And if you’re willing to work around the speed obstacles, you can have photo-realistic images (and even real-time renderings) on any commodity device with a common GPU and a browser! Part 2 will be coming soon! :slight_smile:

5 Likes