Path-tracing in BabylonJS

I see more and more demos like these showing the use of path-tracing with WebGL:
https://dassaultsystemes-technology.github.io/dspbr-pt/
http://madebyevan.com/webgl-path-tracing/

I also know of some apps that use it for image rendering purposes.
I was wondering if we could see this added to BabylonJS, and what the difficulties would be?

Cheers everyone! :smiley:

4 Likes

I did some research to try to understand how these demos work and how we could maybe use the technique in BabylonJS.

Here is a list of apps/demos I have found (I advise you to open one link at a time :sweat_smile:):
Only Spheres and Cubes:

If you look at the GitHub projects and shader code, you can see that they are made to render only spheres and cubes, so I am not sure we could use them.

Full Scene:

With these GitHub projects it becomes harder for me to understand how everything works :innocent:. But since they manage to path-trace a scene with an environment texture and a glTF object, I guess it means it would be possible to actually do that in BabylonJS?
I think the ThreeJS use case is also interesting, as it seems that Erich managed to build path tracing that can be applied to any ThreeJS scene.

Other WebGL/Path-tracing projects found:

2 Likes

If the path tracing is entirely done by a shader, then using the ShaderMaterial should be enough.

All the complexity remains in the shader code itself.
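
For a rough idea, here is a minimal sketch (not a path tracer, just the wiring) of a ShaderMaterial in Babylon.js, with placeholder shaders registered in Effect.ShadersStore; every name below is illustrative:

    import { Effect, Scene, ShaderMaterial } from "@babylonjs/core";

    // Placeholder shaders registered in the ShadersStore; the real
    // path-tracing GLSL would replace these bodies.
    Effect.ShadersStore["pathTracingVertexShader"] = `
        precision highp float;
        attribute vec3 position;
        attribute vec2 uv;
        uniform mat4 worldViewProjection;
        varying vec2 vUV;
        void main(void) {
            vUV = uv;
            gl_Position = worldViewProjection * vec4(position, 1.0);
        }`;

    Effect.ShadersStore["pathTracingFragmentShader"] = `
        precision highp float;
        varying vec2 vUV;
        uniform float uTime;
        void main(void) {
            gl_FragColor = vec4(vUV, 0.5 + 0.5 * sin(uTime), 1.0);
        }`;

    function createPathTracingMaterial(scene: Scene): ShaderMaterial {
        return new ShaderMaterial(
            "pathTracing",
            scene,
            { vertex: "pathTracing", fragment: "pathTracing" },
            {
                attributes: ["position", "uv"],
                uniforms: ["worldViewProjection", "uTime"],
            }
        );
    }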

1 Like

Hi @Deltakosh, thanks for the answer.

I took a look at THREE.js-PathTracing-Renderer again. There are a lot of shaders there, so it seems that the path-tracing code is not ThreeJS-dependent.

Indeed, in the example we can see that they also create a “simple” shader material, as you suggested:

pathTracingMaterial = new THREE.ShaderMaterial({
    uniforms: pathTracingUniforms,
    defines: pathTracingDefines,
    vertexShader: pathTracingVertexShader,
    fragmentShader: pathTracingFragmentShader,
    depthTest: false,
    depthWrite: false
});

I also saw that the pathTracingUniforms variable contains the scene render texture holding the 3D assets, which confirms that we could potentially use any scene.
I guess this means that we could “simply” (but it won’t be simple :laughing:) reproduce what is done with ThreeJS, but with BabylonJS?

I hope Erich Loftis is OK with reverse engineering :upside_down_face:

3 Likes

I agree :slight_smile:

3 Likes

Hello all!
I’m Erich, the developer of the three.js PathTracing Renderer. @PichouPichou has recently contacted me and asked if I would help build a similar renderer using the Babylon.js library. I replied “I’m in!”.

@Deltakosh I must admit that I have mainly been involved with the three.js library, going way back to shortly after Ricardo first started it. But I have always respected the open-source and feature-rich Babylon.js library and, as I mentioned to Valentin, kudos to you, David, and your team! It’s been a while since I tracked the progress or sat down and tried all the demos on the example page, but wow, Babylon has come a long way over the years!

Valentin suggested I post on this thread in hopes that our public discussions and collaborations might benefit everyone interested in the world of ray/path tracing and Babylon.js. Now, I am not an expert in path tracing, the math behind it, or GPU programming, but over the last 5 years of working on my project I have been able to put the necessary pieces together to make real-time path tracing happen inside the browser, whether on desktop or mobile.

Having said that, my project does have some hacks that I’ve had to come up with to get it to work with three.js, WebGL, or both. For instance, the GeometryShowcase demo, which looks like a typical demo from the core library displaying its core shapes (sphere, box, cylinder, etc.), actually does not use the three.js code for those shapes at all. Instead, they are defined mathematically in the fragment shader. If I did use the three.js code for building those shapes, I would end up, as you’re well aware, with a bunch of triangles to feed the path tracer. And as you also know, rendering any significant number of triangles is very expensive, and without a BVH it becomes non-real-time very quickly. So all the basic shapes in my entire project use well-known math routines from the standard ray tracing libraries. The triangle models, however, use the actual triangle data (stored in a data texture on the GPU): the fragment shader steps through the pre-built (.js code) BVH and renders whatever triangles you feed it, like any major renderer out there.
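
To make the “defined mathematically” part concrete, here is the classic analytic ray/sphere intersection (shown as TypeScript for readability; in the renderer this routine lives in the fragment shader as GLSL):

    // Solve |o + t*d - c|^2 = r^2 for the ray parameter t.
    type Vec3 = { x: number; y: number; z: number };

    const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

    // Returns the nearest positive hit distance, or Infinity on a miss.
    // Assumes rayDir is normalized (so the quadratic's 'a' term is 1).
    function intersectSphere(rayOrigin: Vec3, rayDir: Vec3, center: Vec3, radius: number): number {
        const oc = sub(rayOrigin, center);
        const b = dot(oc, rayDir);
        const c = dot(oc, oc) - radius * radius;
        const disc = b * b - c;
        if (disc < 0) return Infinity;            // ray misses the sphere
        const sqrtDisc = Math.sqrt(disc);
        const t0 = -b - sqrtDisc;
        if (t0 > 0) return t0;                    // nearest hit in front of the origin
        const t1 = -b + sqrtDisc;
        return t1 > 0 ? t1 : Infinity;            // inside the sphere, or fully behind
    }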

I suggested to Valentin that we start off simple and just try to get a basic sphere scene working first; then maybe down the road we can add true triangle-model rendering. Oh, I forgot to mention that even though the renderer’s fragment shader does not call the library’s shape-definition routines, it can still rely on the library’s notion of an object transform. In three.js this is called an Object3D and defines a matrix for position, scale, and rotation. I’m sure Babylon has an equivalent data structure. This allows the .js file to feed the shader the transform of every scene object, even though the .js file doesn’t even know what shape it is referring to; it is simply an empty object placeholder with a transform.
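
Babylon’s equivalent of that empty placeholder would presumably be a TransformNode. A sketch of the idea (the uniform name uSphere0InvMatrix is hypothetical):

    import { Scene, ShaderMaterial, TransformNode } from "@babylonjs/core";

    // An empty placeholder with a transform, mirroring THREE.Object3D:
    // the shader alone decides what shape lives at this transform.
    function makeSpherePlaceholder(scene: Scene, mat: ShaderMaterial): TransformNode {
        const node = new TransformNode("sphere0Transform", scene);
        node.position.set(0, 2, 0);
        node.scaling.set(1.5, 1.5, 1.5);

        scene.onBeforeRenderObservable.add(() => {
            // Feed the inverse world matrix so the shader can transform
            // rays into the unit shape's local space.
            const inv = node.computeWorldMatrix(true).clone().invert();
            mat.setMatrix("uSphere0InvMatrix", inv);
        });
        return node;
    }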

So even though the simple demos don’t rely on the core shapes of the host library, they definitely benefit from all the math/matrix routines going on in JavaScript behind the scenes as the scene is updated many times a second. Also, having a host library like Three or Babylon provides automatic support for loading models, loading textures, user input, window sizing, WebGL management, UI, etc. In fact, having a library to build off of is the reason I ‘piggybacked’ on Three, as opposed to something like MadeByEvan’s older path tracer, which did everything from scratch. I definitely needed (and still need) a WebGL library in place to build the path tracer on! :wink:

Well, sorry for the long-winded intro but I wanted to be as clear and open as possible about the challenges and benefits of building a renderer on top of a WebGL library. To David: if we run into issues along the way (and I’m sure we will!) when trying to implement this using Babylon, I hope that you can guide or help us, or point us in the right direction.

Looking forward to working with you all! :slight_smile:
-Erich

17 Likes

Thank you so much, @erichlof, this is amazing :smile:

3 Likes

That is FANTASTIC.

First, thanks a lot for your feedback; the whole team gets its energy from this kind of feedback!

Then let me officially welcome you to the family :) It is a pleasure and an honor to have you with us!

All the team (not just me) and the community will be here to help you along the way, and by the way, I know that @cedric will be more than happy to help, as he is currently working on a series of blog posts about path tracing :smiley:

Welcome!

4 Likes

I have been looking at BVHs for Babylon.js: Bounding volume hierarchies (BVH) for ray tracing with Morton Codes

@erichlof is this of use in producing an optimised official version in Babylon.js or do you already have code for this?
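
For readers unfamiliar with Morton codes: the idea is to interleave the bits of each primitive’s quantized x/y/z centroid so that sorting by the resulting code groups spatially nearby primitives together. A minimal sketch of the standard 30-bit encoding (the constants are the usual bit-spreading masks):

    // Spread the lower 10 bits of v so there are two zero bits
    // between each original bit.
    function expandBits(v: number): number {
        v = (v * 0x00010001) & 0xFF0000FF;
        v = (v * 0x00000101) & 0x0F00F00F;
        v = (v * 0x00000011) & 0xC30C30C3;
        v = (v * 0x00000005) & 0x49249249;
        return v;
    }

    // 30-bit Morton code for a point with coordinates in [0, 1].
    function morton3D(x: number, y: number, z: number): number {
        const qx = Math.min(Math.max(x * 1024, 0), 1023) | 0;
        const qy = Math.min(Math.max(y * 1024, 0), 1023) | 0;
        const qz = Math.min(Math.max(z * 1024, 0), 1023) | 0;
        return (expandBits(qx) << 2) | (expandBits(qy) << 1) | expandBits(qz);
    }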

1 Like

Hello @JohnK,

Wow, that Morton code BVH in .js is impressive! The BVH time was definitely faster than the octree time in the Helmet demo. And the Sphere demo was really fast. Good work!

As to whether you should implement this as the general Babylon.js ray tracing structure, or use something like mine, let me start off by admitting that of all the dozens of components necessary to implement ray tracing, the acceleration structure is my weakest area of knowledge. I have read (but mostly not understood) several research papers, some by companies like Nvidia, most from academia, comparing different acceleration-structure choices, or describing their unique spin on the classic structures, like your Morton code example.

The good news for those of us using BVHs is that most scholars and professional programmers alike seem to agree that for ray tracing the BVH comes out on top: over regular grids, over octrees, and slightly over k-d trees. This is evident in your Helmet example. Also, Nvidia’s RTX cards use only BVHs for their newest cutting-edge demos; I’ve heard them mention it in a couple of technical presentations. In fact, if I’m not mistaken, they even have a dedicated processing unit on the cards that does nothing but BVH handling and traversal. So if they’re doing it, we can’t be too far off the mark! Ha

Unfortunately, when it comes to implementing a BVH the Nvidia way, their structure building and traversal are proprietary, as their hardware sales and ray tracing software rely on them. So whenever I read an Nvidia paper, I feel like I’m not getting the whole picture, or the whole source code, that I would need to implement it myself.

Contrast that with a paper from academia (often part of a thesis or dissertation): you may get more source code, or a GitHub link if you’re lucky, but I personally find it hard to wade through the formal, sometimes math-heavy text. Since it comes from an old tradition, I feel the authors are conforming to the formal academic research-paper style, which doesn’t bode well for me, a non-CS, non-math-degreed hobbyist coder!

Please take a look at my BVH .js builder; I credit the original author at the top of the file. His GitHub BVH C++ code inspired me and kind of showed me how to approach this complex topic. As for the GPU storage and traversal of this structure, I must confess that I had to reverse engineer some minified fragment shader code inside the Antimatter WebGL path tracer by inspecting it in the browser developer tools, because it was not open source or on GitHub anywhere. As ‘penance’ for borrowing like that, I manually added spaces and sensible variable and function names to the minified code (working with lines like int g = c(h, j);) and worked through it line by line. Since then I have made it my own, optimized it, and made it work with WebGL 2, so I don’t feel so bad about using copied code, ha.
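
To illustrate the “data texture” side of this, here is a sketch of packing flattened BVH nodes into an RGBA float texture that the fragment shader can read with texelFetch; the two-texels-per-node layout is hypothetical, and in Babylon.js a RawTexture could presumably serve:

    import { Constants, RawTexture, Scene } from "@babylonjs/core";

    // Hypothetical layout: 2 texels per node (AABB min + child offset,
    // AABB max + triangle count), 4 floats per RGBA texel.
    function createBVHDataTexture(nodeData: Float32Array, scene: Scene): RawTexture {
        const texelCount = nodeData.length / 4;
        const width = 2048;                           // fixed row width...
        const height = Math.ceil(texelCount / width); // ...and as many rows as needed

        // Pad the buffer so it fills the last row exactly.
        const padded = new Float32Array(width * height * 4);
        padded.set(nodeData);

        return new RawTexture(
            padded,
            width,
            height,
            Constants.TEXTUREFORMAT_RGBA,
            scene,
            false,                                    // no mipmaps
            false,                                    // don't invert Y
            Constants.TEXTURE_NEAREST_SAMPLINGMODE,
            Constants.TEXTURETYPE_FLOAT
        );
    }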

So if you feel that the Morton codes would speed up building, by all means go with it! I must admit I don’t really understand Morton codes yet, or how to implement something like that inside the BVH building. Having said that, I feel like my current traditional BVH runs pretty fast on the GPU in the browser for simple scenes.

This reply is already too long, but in a later post I will give an overview of how the BVH building is done, how it is stored on the GPU, and how it is traversed in my current renderer. That way, anyone who is interested in diving deeper into that world can hopefully benefit from our discussion and comments. Who knows, maybe someone will help me improve it, or help me get it to work with large models on mobile (which is still an outstanding issue that I don’t fully know how to deal with)!

Thanks for contributing to this thread!

1 Like

@Deltakosh
That’s great! Thank you for the friendly welcome. I’m so glad that you and others will be willing to help. I think that @PichouPichou will be creating a public GitHub repo for this project, so we can all see what’s happening and work together. From what I’ve seen of the demos, Babylon.js already has all the necessary components to make this happen. I’m confident that we’ll get something up and running relatively soon!

1 Like

@erichlof thank you for your reply. As you have a tried and tested means, both in JavaScript and on the GPU, to generate BVHs, it makes sense to port those to Babylon.js as path tracing is brought into BJS. As for using Morton codes over other methods, I am not sure there would be any gain, or, if there is, how significant it would be compared with your BVH build. Also, working with the GPU is not in my skill set.

I will follow developments with interest and, once this is completed, try to pick up on using BVHs for collisions of complex meshes with many triangles.

It’s great to have you on board.

2 Likes

Regarding BVHs (and a lot of other computer graphics subjects too), a great resource is the “Physically Based Rendering: From Theory to Implementation” book: http://www.pbr-book.org/

In chapter 4, you get a full implementation of a BVH (in C++, but easily understandable) with detailed explanations.

2 Likes

Hey, everybody!

It’s really nice to see so much motivation for this project. I know I can thank you all in advance for your help and contributions.

I have to admit that I don’t understand everything you say about the different path tracing methods, but it’s really nice to see you all discussing them already!

We are seeing more and more path-tracing demos using WebGL, and I think it’s time BabylonJS had its own so we can all have fun with this rendering technique!

To start officially, we just need a GitHub project, which I’ll share with you as soon as possible along with my first progress, thanks to the explanations @erichlof has already sent me by email.

Cheers to everyone, @PichouPichou

4 Likes

Hello! Ah yes, I forgot about the pbrt BVH chapter. I had recently gone through the entire book, but I kind of skimmed over the BVH chapter because I had just gotten mine working, so I think I must have moved on to the next chapter, ha. But now I will go back and study it more carefully. By the way, the partitioning algorithm I use is the SAH (surface area heuristic), in line with and similar to the one in chapter 4 of the pbrt book you referred to. However, when trying to do the binning for the best splitting-plane position along the chosen axis (X, Y, or Z), it sometimes hung up and crashed on me because it couldn’t resolve the minimum cost (no matter where it tried moving the split), which is required by the SAH. It might be a JavaScript precision issue when you get groups of really close triangles that you have to split up. So I ended up still doing a minimum-cost SAH, but I only compare the 3 cardinal axes with the split plane at the exact midpoint of that dimension of the box. Therefore, in the end, I have a kind of hybrid of two of the pbrt choices: SAH and Middle. I believe the other choices were Morton (LBVH) and equal counts of triangles in each child.
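
Here is a sketch of that hybrid: only the three midpoint split candidates (one per axis) are compared, each scored with the SAH cost. For brevity it uses centroid bounds; my real builder works with the triangle AABBs:

    type Vec = [number, number, number];
    type AABB = { min: Vec; max: Vec };

    function emptyBox(): AABB {
        return { min: [Infinity, Infinity, Infinity], max: [-Infinity, -Infinity, -Infinity] };
    }

    function growBox(box: AABB, p: Vec): void {
        for (let i = 0; i < 3; i++) {
            box.min[i] = Math.min(box.min[i], p[i]);
            box.max[i] = Math.max(box.max[i], p[i]);
        }
    }

    // Half the surface area is enough for comparing SAH costs.
    function halfArea(b: AABB): number {
        const dx = b.max[0] - b.min[0], dy = b.max[1] - b.min[1], dz = b.max[2] - b.min[2];
        return dx * dy + dy * dz + dz * dx;
    }

    // SAH cost of a candidate split: area(left)*Nleft + area(right)*Nright.
    // Returns the winning axis, or -1 if every midpoint split is degenerate.
    function chooseMidpointSAHSplit(centroids: Vec[], bounds: AABB): number {
        let bestAxis = -1;
        let bestCost = Infinity;
        for (let axis = 0; axis < 3; axis++) {
            const mid = (bounds.min[axis] + bounds.max[axis]) * 0.5;
            const left = emptyBox();
            const right = emptyBox();
            let nLeft = 0;
            let nRight = 0;
            for (const c of centroids) {
                if (c[axis] < mid) { growBox(left, c); nLeft++; }
                else { growBox(right, c); nRight++; }
            }
            if (nLeft === 0 || nRight === 0) continue; // everything fell on one side
            const cost = halfArea(left) * nLeft + halfArea(right) * nRight;
            if (cost < bestCost) { bestCost = cost; bestAxis = axis; }
        }
        return bestAxis;
    }
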
If those of you reading this don’t know what we’re talking about, I will explain in detail at a later point how all this works in my current renderer. Don’t worry, it took me a long time to come to grips with this complex system! :slight_smile:

3 Likes

Hi everyone.

Here is the GitHub project: GitHub - nakerwave/BabylonJS-PathTracing-Renderer

Basically, in this first step I translated the ThreeJS usage into the corresponding BabylonJS code.
@erichlof I don’t know if you are familiar with TypeScript, but BabylonJS uses it and so do I. Plus, I use the ES modules build of BabylonJS so that we import only what we need, and combined with TypeScript it makes autocompletion very powerful!

In the src folder you will find 3 main files:

I also started a demos/Geometry folder for the next steps.

Of course, I have several questions after these early developments! :grin:
For @erichlof

  • I don’t understand what the screenTextureShader.uniforms value is. There is a uniforms parameter in the creation of a BabylonJS ShaderMaterial, but it is an array of strings matching the shader variables. To help, here are the links to the BabylonJS ShaderMaterial documentation and the ShaderMaterial class.
  • What is the difference between elapsedTime and frameTime?

And for BabylonJS Team:

  • I have used the BabylonJS RenderTargetTexture to replace the ThreeJS WebGLRenderTarget, but the options are not exactly the same. Maybe someone could take a look to check whether I picked the right ones. For instance, I don’t know what the minFilter and magFilter equivalents are in BabylonJS; maybe boundingBoxSize? RenderTargetTexture class

Other comments:

  • Instead of triggering every event with a custom cameraControlsObject, I think the best approach would be to just attach the controls to the worldCamera, let BabylonJS manage the camera input, and then use the worldCamera coordinates to update the scene (see the sketch after this list). For instance, I guess we will have to extract the fov for the shader from the camera.
  • I feel this article about RenderTargetTexture and multipass rendering will help with further progress.
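
Here is a sketch of what I mean, assuming a standard UniversalCamera (the uniform names are hypothetical):

    import { Scene, ShaderMaterial, UniversalCamera, Vector3 } from "@babylonjs/core";

    // Let BabylonJS manage the camera input, then mirror the camera's
    // state into path-tracer uniforms every frame.
    function hookWorldCamera(scene: Scene, canvas: HTMLCanvasElement, mat: ShaderMaterial): UniversalCamera {
        const worldCamera = new UniversalCamera("worldCamera", new Vector3(0, 2, -10), scene);
        worldCamera.attachControl(canvas, true);

        scene.onBeforeRenderObservable.add(() => {
            // Babylon's fov is the vertical field of view in radians.
            mat.setFloat("uFovScale", Math.tan(worldCamera.fov * 0.5));
            mat.setMatrix("uCameraMatrix", worldCamera.getWorldMatrix());
        });
        return worldCamera;
    }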

Of course, any idea to facilitate our collaboration on this project is welcome. I am very happy that it is really getting off the ground and I can’t wait to see it come to life! :innocent:

3 Likes

Hi Valentin, great work so far!

Yes, I am OK with TypeScript. I don’t use it every day, but I can sort of get the gist of what’s happening and how it differs from JavaScript.

As to your questions: yes, sorry about the screenTextureShader.uniforms value being in another file. If you look at the top of my pathTracingCommon.js file, you’ll see those uniforms defined. I realize now that I should have been more consistent in my shader text handling. I think the reason I did it this way is that screenTextureShader and screenOutputShader were both so short, and changed hardly at all throughout this project’s development, that I decided to just put the uniforms, vertex shader code, and fragment shader code in single-quoted strings and ask Three to load them in. The pathTracingVertexShader and pathTracingFragmentShader, on the other hand, are the heart of the path tracer and must be constantly edited, so instead of having to work with single quotes everywhere in my editor, I just loaded the GLSL code into Visual Studio Code, enabled GLSL language support, and then I have linting, syntax highlighting, etc. on the file. In hindsight, and also moving forward, it might be better to have all the vertex and fragment shaders in their own files (some being very tiny, the main path tracing fragment shader being huge) and save them with their native GLSL extension in a ‘shaders’ folder. In other words, no quotes: pure GLSL files. But that only works if Babylon has the capability to load an arbitrary file like that and turn it into text to be given to the material as the vertex and fragment shaders and uniforms. If not, we will have to put every line of GLSL code in single quotes (or maybe there’s a way to do it with the whole file in one set of quotes?). I’m fine with either way, but I think it should be consistent across all shaders for clarity’s sake.
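
For what it’s worth, Babylon.js appears to support exactly this: when the shaderPath argument of ShaderMaterial is a string that is not registered in Effect.ShadersStore, the engine fetches <path>.vertex.fx and <path>.fragment.fx as plain GLSL files. A sketch, with hypothetical file and uniform names:

    import { Scene, ShaderMaterial } from "@babylonjs/core";

    // Babylon resolves "./shaders/pathTracing" by fetching the plain GLSL
    // files ./shaders/pathTracing.vertex.fx and ./shaders/pathTracing.fragment.fx,
    // so the shaders can live in their own files with full editor support.
    function createPathTracingMaterialFromFiles(scene: Scene): ShaderMaterial {
        return new ShaderMaterial("pathTracing", scene, "./shaders/pathTracing", {
            attributes: ["position", "uv"],
            uniforms: ["worldViewProjection", "uTime", "uResolution"],
        });
    }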

About elapsedTime and frameTime: elapsedTime is like a seconds counter (like a regular upward-counting second hand on a real clock), whereas frameTime is how much time has passed since the last frame. If the app is not running at a constant 60 fps, that number can vary from frame to frame. I need both notions of time because frameTime is the traditional deltaTime, and it is multiplied against anything that moves in the scene on the JavaScript side of things, so that even if the user’s framerate is not consistent, the apparent movement speed of objects will be. elapsedTime is useful for feeding the incremental uTime shader uniform so that, for instance, moving water waves or moving clouds can have their uvs driven by it.
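
In Babylon.js terms, maintaining both could look like this sketch (engine.getDeltaTime() returns the milliseconds elapsed since the last frame):

    import { Engine, ShaderMaterial } from "@babylonjs/core";

    let elapsedTime = 0; // seconds since startup; feeds the uTime uniform

    function tick(engine: Engine, mat: ShaderMaterial): void {
        const frameTime = engine.getDeltaTime() / 1000; // seconds since last frame
        elapsedTime += frameTime;

        mat.setFloat("uTime", elapsedTime);
        // Anything that moves on the JS side gets scaled by frameTime so its
        // apparent speed stays constant even when the framerate fluctuates:
        // position.x += speed * frameTime;
    }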

Sorry again about the inconsistency of the shader texts and the file-hopping between definitions. When I first got all this working, I was just so happy that I moved on to the next TODO, without thinking much about unifying and clarifying all the setup bootstrap code, or giving things more thoughtful variable names.

I’m confident, however, that Babylon.js has all the necessary components to make it work, and maybe in the process of porting we will be able to refactor and clean up the setup code.

Thanks again for starting the repo!

3 Likes

Use the samplingMode parameter to set the min/mag filter.

In 3js:

minFilter: THREE.NearestFilter,
magFilter: THREE.NearestFilter,

means using nearest filtering both when the texture is minified and when it is magnified.

There’s a generateMipmaps option (not described in the 3js doc): if it is not provided, no mipmaps are generated (so what you did is OK, as you pass false for the generateMipMaps parameter of the RTT).

So, the value to pass for samplingMode is Constants.TEXTURE_NEAREST_SAMPLINGMODE (mag = nearest, min = nearest, no mip).

For the type parameter you should pass Constants.TEXTURETYPE_FLOAT.

For format, use Constants.TEXTUREFORMAT_RGBA.
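
Putting those together, the RTT creation could look like this sketch (the arguments follow the positional RenderTargetTexture constructor; disabling the depth/stencil buffers is an assumption, since the path tracer does its own visibility):

    import { Constants, RenderTargetTexture, Scene } from "@babylonjs/core";

    function createPathTracingRTT(scene: Scene, width: number, height: number): RenderTargetTexture {
        return new RenderTargetTexture(
            "pathTracingTarget",
            { width, height },
            scene,
            false,                                  // generateMipMaps
            false,                                  // doNotChangeAspectRatio
            Constants.TEXTURETYPE_FLOAT,            // type
            false,                                  // isCube
            Constants.TEXTURE_NEAREST_SAMPLINGMODE, // samplingMode
            false,                                  // generateDepthBuffer
            false,                                  // generateStencilBuffer
            false,                                  // isMulti
            Constants.TEXTUREFORMAT_RGBA            // format
        );
    }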

3 Likes

Thanks for both your answers. This is really helpful!

@erichlof what is the math to get elapsedTime then? I can get the fps and the frame rate from BabylonJS, but nothing about elapsedTime :wink:
I agree the shaders should be managed in separate GLSL files to make them easy to modify. This will really help in the long term when we need to make shader modifications.

I will come back to you once the modifications have been done. :upside_down_face:

1 Like

I hope nobody minds me butting in about elapsedTime and frameTime, and I hope I have it correct.

In this PG https://www.babylonjs-playground.com/#KT9EE7#27

The (x, z) movement of the sphere is based on frameTime: its change in position per frame is given by velocity * frameTime.

The movement of the surface is based on elapsedTime: its shape is given by a function of an angle parameter, the angle at any time depends on elapsedTime, and so the shape is a function of elapsedTime.

In this case elapsedTime is given by the performance.now() method; however, it could also be accumulated using

elapsedTime += frameTime;

The y position of the sphere is calculated directly from its current (x, z) position using the ribbon array.

2 Likes