Path-tracing in BabylonJS

To get the mesh data, you should not access the private properties of Geometry but simply do:

mesh.getVerticesData("position") // => will return an array with the vertex coordinates
mesh.getVerticesData("uv") // => will return an array with the uv coordinates
mesh.getIndices() // => will return an array with the vertex indices. If you called convertToUnIndexedMesh, you don't need this call, as the mesh will be non-indexed
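Putting those calls together, here is a minimal sketch in plain JavaScript of pulling a flat triangle list out of a mesh. The helper name getFlatPositions is my own, not a Babylon API - only getVerticesData/getIndices mirror the real Babylon calls above:

```javascript
// Pull a flat triangle list out of any mesh-like object exposing the two
// calls above. If getIndices() returns nothing (an unindexed mesh, e.g.
// after convertToUnIndexedMesh), the position data is already flat.
function getFlatPositions(mesh) {
  const positions = mesh.getVerticesData("position");
  const indices = mesh.getIndices();
  if (!indices || indices.length === 0) return Float32Array.from(positions);
  const out = new Float32Array(indices.length * 3);
  for (let i = 0; i < indices.length; i++) {
    out.set(positions.slice(indices[i] * 3, indices[i] * 3 + 3), i * 3);
  }
  return out;
}
```

If the mesh was already run through convertToUnIndexedMesh, the index branch is simply skipped and the positions pass straight through.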
7 Likes

Thank you guys!
@JohnK This is exactly what I was looking for! I knew there had to be some sort of equivalent to three.js' .toNonIndexed() method. I don't know why this function didn't pop up in my GitHub search results for the Babylon repo. I don't know if it's just me, but the search feature for GitHub repos is finicky and sometimes difficult to use - if you don't type the exact search string, it won't show in the results (for instance, I typed nonIndexed when I guess I needed to type 'unIndexed'). And if you use too generalized a search string like "indexed", every single result shows up across the entire Babylon repo in no particular relevance order, and I end up having to wade through 15+ pages of results. I hope Microsoft (who owns GitHub now) can improve the GitHub search engine.

Thank you also for the very helpful PG example! This allows me to see the flow of the steps to bring in and convert the model to my desired representation.

@Evgeni_Popov Thank you for the tips and best practices. I will definitely use your way because I don’t want to access hidden variables/structures when trying to get the vertex data. This helps a lot, thank you!

To all, hopefully I can soon load in and extract the vertex data for the Stanford Bunny model. I'll keep you all posted on my progress. I'm sure I'll need more assistance at the next junctures in this glTF loading process. Thanks again for your help!

3 Likes

Just thought I’d leave this here for those who are interested:

https://link.springer.com/book/10.1007/978-1-4842-7185-8

4 Likes

@adam Thanks!

I’m glad you posted the book release and links: I was trying to remember to keep checking NVIDIA’s news for this upcoming free book, but it somehow slipped my mind. From first glance at the table of contents, it looks like there’s a lot of good material to chew on!

Back in February I actually 'threw my own hat in the ring' with this series and submitted a couple of different article proposal ideas to the 3 main editors, Dr. Peter Shirley (who now works for NVIDIA, and is one of my ray tracing heroes) being one of them. Dr. Eric Haines (of Real Time Rendering, who now also works for NVIDIA) had seen the three.js path tracing renderer and suggested I try submitting an article idea to Ray Tracing Gems II. Surprised and humbled, I put forth my best effort and ideas, but ultimately Dr. Shirley said that although my WebGL2 path tracing project was cool, my entire system was not focused enough for a rigorous technical article like the ones found in the RTG series. I subsequently tried narrowing it down in focus and scope, but my ideas weren't novel/breakthrough enough, I guess. In any case, I'm sure I will enjoy reading this new book and I hope to use some of its ideas in our rendering system here!

btw Adam, I like your profile pic (Commodore)! That C= image really brings back good memories of my first programming experience with BASIC and my beloved Commodore 64 when I was 11 in 1984 (yes I’m old, ha) :smiley:

6 Likes

We’re about the same age! I so wish I still had my original C64 bread bin (and Amiga 500).

What you’re doing here reminds me of how blown away I was when I saw the Amiga Juggler for the first time. Keep up the awesome work!

5 Likes

Ditto. Same vintage here. I started with C64, then Amiga 500 & 1200 … those were the days :slight_smile:

3 Likes

@adam and @inteja
Awesome! Those Commodore systems were truly the golden years of home computing. Not to go too far off topic on this thread, but if you’ll allow me this one nostalgic post: :slight_smile:

I believe the Commodore 64 (and later the Amiga - although I really wanted one, I couldn't justify to my parents as a 12-year-old that I needed to upgrade, ha, so I never quite got one) was the perfect system (for what it was trying to do) that came out at the perfect time in computer history.

My parents got me a Commodore 64 when I was 11 years old because we all thought (me included!) that I was going to play cool games on it - kind of like a kid with a Nintendo Switch nowadays. Arcades were still going strong and the thought of playing them for free at home (no quarters!), although with reduced graphics quality compared to their coin-op originals (which I learned to live with I guess) was too good to pass up.

What happened after I unboxed it and switched it on for the first time, though, was unexpected - to me, to my parents, and I believe to some extent, even to Commodore themselves. It showed that famous blue screen, some tech words that made no sense to me about system RAM and bytes (oh, maybe that's why my new game system is called Commodore '64'?), the word 'Ready', and a blinking cursor. It didn't take long for me to look at the instruction manual and try typing (at Commodore's gentle suggestion) 10 PRINT "HELLO" and see that my new game system just printed out something! I later realized I could add a second line, 20 GOTO 10. It started streaming a bunch of HELLOs until I mercifully stopped the madness, bwhaa ha ha. In that moment I felt like someone had given me a superpower over the computer!

Thus began my long and enjoyable programming journey. Sure I played lots of games on that Commodore 64 (some of which were mysteriously given to me for free on a floppy disk from some kid at school, sshh… don’t tell anyone! lol ). But I would say that I spent an equal amount of time tinkering with BASIC on that blue startup screen. I even typed/copied whole programs that were found in the pages of Compute magazine, although I had no idea what I was typing/copying. But amazingly, simple graphics and game input would suddenly appear. I was hooked for life!

I think that part of the 'lightning in a bottle' that Commodore found in the 80's was advertising and appealing to kids based on the recent popularity of arcade games - but their stroke of genius was to include the whole system inside a physical hardware keyboard case. In other words, as a kid you got a typewriter with your new game console, whether you wanted one or not! If they hadn't packed everything inside a keyboard case, I probably would not have gotten into programming at all. I initially thought I just wanted a game system at home: but in the process of having to use the supplied hardware keyboard, I realized that I liked coding too!

We might sound nostalgic, but like you guys I also truly miss those days of being fascinated by your home computer and being a little in awe of it and even scared of it as a kid (dare I type RUN?, … holds breath). I kind of feel sorry for young kids today as, through no fault of their own, they are inundated with technology; technology that is instant, readily available, and unfortunately, easy to take for granted. Sometimes I wish that modern game consoles came packaged inside a keyboard, and when you turned one on, it would display a split screen - if you select the left side, you go to the usual console game selection menus, but if you select the right side (which looks like the Ready screen with blinking cursor), you enter 'Coding fun' mode or 'Discovery' mode (or some similar cleverly-named mode), which would be just a BASIC interpreter to play around with (I'm pretty sure that a modern console could afford the additional 64K or 128K of RAM, ha!).

How many possibly talented game/graphics low-level systems developers have we missed out on over the years because modern game systems don’t come with a keyboard and a screen to play around on and experiment on? I know it’s probably not financially viable to actually sell thousands of units that are packaged this way, but it’s interesting to imagine.

Ok, that’s enough reminiscing about the old days. I actually have made progress in the gltf/glb loading effort - which will be detailed in the next post (I promise to get back on topic! lol). Thanks for going down memory lane and dreaming a little with me! :slight_smile:

3 Likes

To all,
More steps have been taken towards the goal of path tracing triangle models, yay! I believe that I am now correctly loading in and reading the unIndexed vertex position data. If you look at the Babylon PathTracing Renderer GitHub repo, you'll notice I added some standard and some historically famous test models in the 'models' folder. The historic test ones are in .glb format and the more modern standard webgl ones are in .gltf format. I suppose in the future we could add support for .obj and their accompanying .mtl files (or any other format, theoretically), but for now I would like to just narrow the focus to the glTF/glb format, as this is the preferred choice (I think?) for working with modern webgl-related systems and webpages.

If you look at the browser console while running the demo, you'll hopefully see the debug log output that I added to have some sort of confirmation that the model is indeed loading correctly and that it is being converted to a lengthy float32array list of every single vertex x,y,z position component for the entire glTF/glb file. As a reminder, the actual path traced scene still shows the 2 spheres in the Cornell Box scene because we're not yet at the stage where we can start tracing the model's BVH structure and triangles, but we're getting closer with each of these small steps, I promise! :smiley:

If you clone the repo and try it out on your own computer, you can experiment with loading different models by simply commenting/uncommenting the lines where it reads:
modelNameAndExtension = "StanfordBunny.glb";
//modelNameAndExtension = "UtahTeapot.glb";
//modelNameAndExtension = "StanfordDragon.glb";
//modelNameAndExtension = "Duck.gltf";
//modelNameAndExtension = "DamagedHelmet.gltf";

Thanks again to @JohnK and @Evgeni_Popov for the help with indexed/unindexed conversions. It really helped me move forward with all this. Interestingly, the models with a .glb file extension do not have indexed geometry, while the ones with a .gltf file extension do - at least from what I've experienced so far with this small collection of testing models. Therefore, I had to add in a couple of lines of code that check for indices in the model file, and if they are detected, remove them. Please take a look at my loading code, as I don't know if this is the best way to load in a model, or the best way to detect if the file has indices for its geometry.

If all this looks reasonable, the next step is using these float32array x,y,z vertex locations to build up a tight AABB around each triangle. Once this is done, I can simply feed it to the BVH_Builder, which unless I made a porting error, should be already fully functional. Once that is in place, I will add the ray-BVH traversal and triangle intersection code to the ongoing path tracing shader library.
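For anyone following along, the per-triangle AABB step can be sketched like this (my own toy helper, assuming the flat unindexed layout of 9 floats per triangle described above):

```javascript
// Build a tight axis-aligned bounding box around each triangle.
// triangles: Float32Array with 9 floats (3 x,y,z vertices) per triangle.
function buildTriangleAABBs(triangles) {
  const boxes = [];
  for (let t = 0; t < triangles.length; t += 9) {
    const min = [Infinity, Infinity, Infinity];
    const max = [-Infinity, -Infinity, -Infinity];
    for (let v = 0; v < 3; v++) {
      for (let axis = 0; axis < 3; axis++) {
        const p = triangles[t + v * 3 + axis];
        if (p < min[axis]) min[axis] = p;
        if (p > max[axis]) max[axis] = p;
      }
    }
    boxes.push({ min, max });
  }
  return boxes;
}
```

The resulting box list (plus each triangle's centroid, easily derived from min/max) is exactly the kind of input a BVH builder consumes.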

Will return with more updates hopefully soon!

7 Likes

Success! :smiley:

glTF/glb models are now able to be quickly loaded, their geometry parsed and converted to a BVH-builder-friendly representation, a BVH tree built around the model on the CPU side, the BVH tree stored to a GPU DataTexture, then that BVH data is ray cast against in the real-time path tracing renderer; all this in a matter of seconds!

new glTF/glb model path tracing demo

I took our Babylon renderer an extra step and added a GUI drop-down list model picker (my three.js renderer did not even have this). This menu allows the end user to easily choose which model they want to load and have path traced. At first I wasn't sure what would happen if I all of a sudden interrupted the render loop to go and load a new model, convert the triangles, build all the AABB boxes, build the BVH, and then send all of this over to the GPU - but lo and behold, it works seamlessly! When you switch between models, it only stalls for a couple of seconds (understandably!), and then continues merrily path tracing against the updated scene as if nothing had happened. I was delightfully surprised by this new feature actually working!

You may notice that only the iconic classic ray tracing .glb models (teapot, bunny, dragon) are available at the moment. I plan to add support for glTF files that contain material data and textures (like Duck.gltf and DamagedHelmet.gltf). But this is farther down the line, as I have to port my material loading/processing scheme from the three.js renderer over to our Babylon renderer. I'm confident I will get the basics working - it's just a matter of time. Also, I might ask for help with processing materials and assigning those to individual triangles (with vertices in an unIndexed flat array) as they are loaded from glTF files. This may require some more finesse than I have used in the past with the three.js renderer.

But the good news is: we are now path tracing triangular glTF/glb models in real time inside the browser with the Babylon.js framework! :smiley:

p.s. try switching the RSphere material, heh heh :wink:
Enjoy!

16 Likes

Woot! This is impressive!

1 Like

Very impressive work! :slight_smile:

A couple of questions came to mind:

  • What is the gltf scene size limitation at this point?
  • About raytracing: Why doesn't transparent material produce caustic effects? (I don't know much about the topic, but I am expecting to see the caustics naturally somehow.) Edit: Or maybe it does, but for example the dragon model is not on the floor, so it's not observable? :thinking: Edit2: The caustics are there, of course! :slight_smile: They are even observable from the ball models, but it takes a certain camera angle and I have to wait a longer while for it to render. :+1:

1 Like

Hello @JSa

Thank you for the compliment! About the model size question, I have tested the js CPU-side BVH builder and the real-time GPU BVH/triangle intersection code inside the path tracing shader up to a million triangles. If you take a look at my three.js version GitHub repo, the BVH_Large_Terrain demo for instance loads and handles a 730,000+ poly scene pretty well (30+ fps on my old, underpowered laptop).

A word of warning however: since the BVH data texture is a fixed size on the GPU at startup time, the current 2048x2048 texture will handle up to around 200,000 polys, but if you need more than that, say 200,000 to 1,000,000+ triangles, then the shader texture resolution defines and the js CPU-side data texture creation functions need to up the texture resolution to 4096x4096, in order to comfortably have enough room to hold all of that data. I suppose we could try 8192x8192 if we were brave, but I was a little wary of that since that is too big for some devices to handle (and WebGL), unless you’re on a powerful, newer desktop.
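As a rough back-of-the-envelope sketch of that sizing logic (my own helper; texelsPerTriangle is a placeholder - the real number depends on the chosen data layout, including the BVH nodes' share of the texture):

```javascript
// Pick the smallest power-of-two square data texture that can hold the
// scene. texelsPerTriangle stands in for however many RGBA texels the
// chosen layout spends per triangle (plus its share of BVH nodes).
function dataTextureSize(triangleCount, texelsPerTriangle) {
  const texelsNeeded = triangleCount * texelsPerTriangle;
  let size = 1;
  while (size * size < texelsNeeded) size *= 2;
  return size; // the texture is size x size
}
```

With 16 texels per triangle as the placeholder, for example, 200,000 triangles fit in 2048x2048 while 1,000,000 need 4096x4096, which roughly lines up with the numbers above.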

And now the disclaimers, ha: since no system is perfect, especially when dealing with WebGL and the browser, you will notice that if you fly the camera right up close to a model, the frame rate will drop significantly. Although my shader BVH intersection routine is very fast, it does suffer from GPU thread divergence when each pixel’s ray is accessing and traversing a completely different part of the BVH structure all at the same time. When the model takes up less of the screen space, the BVH raycaster does really, really well. It’s a tradeoff, like anything else in real time graphics. I don’t yet know of a more sophisticated or efficient BVH raycaster out there that I could replace the current one with, even if I wanted to. But for the most part, it does the job nicely and keeps the framerate interactive.

Regarding the caustics, yes, you are correct to point out the missing bright hot spots on the floor and walls when the material of the glTF model is set to transparent. This is because I had the BVH set to cull back-facing triangles to help speed things up, exactly like we do in traditional rasterization engines. If you take a look at the PathTracingCommon.js file however, you'll notice there are 2 BVH triangle intersection routines: a single-sided and a double-sided triangle raycasting function. In order to get photorealistic caustics, we need to use the double-sided triangle intersection function, which is more physically accurate, but also less efficient for realtime pathtracing.

I will soon add a GUI menu picker to allow selection between single-sided and double-sided raycasting, when dealing with transparent materials. An additional note: only when raytracing transparent materials do we need the extra double-sided triangle routine. This is because if it was metal, diffuse, or clearCoat diffuse, the model would hopefully be originally created/defined in a way where the viewer should not be allowed to see the inside. Another word for this I think is the model having a ‘watertight’ geometry: no open, dangling triangle faces sticking out to possibly expose a triangle’s back face. You can also think of it like the camera and thus viewing rays should not be allowed to enter the inside of a solid metal, diffuse, or clearCoat diffuse object. Theoretically, it would be pitch-black inside a properly constructed model, and rays that somehow did penetrate the object would not contribute any value to that pixel’s color. This allows us to safely use the faster single-sided outward-facing-only triangle routine on most scenes.
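To make the single/double-sided distinction concrete, here is a generic, textbook-style Möller-Trumbore ray/triangle test in plain JavaScript (my own sketch, not the actual shader code): the only difference in double-sided mode is that the determinant's sign, which is what tells us we are looking at a back face, is ignored.

```javascript
// Ray/triangle intersection (Moller-Trumbore). Returns the hit distance t
// or Infinity on a miss. With doubleSided=false, back faces are culled.
function intersectTriangle(orig, dir, v0, v1, v2, doubleSided) {
  const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
  const cross = (a, b) => [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

  const e1 = sub(v1, v0), e2 = sub(v2, v0);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  // Single-sided mode: a negative determinant means a back-face hit.
  if (doubleSided ? Math.abs(det) < 1e-8 : det < 1e-8) return Infinity;
  const invDet = 1 / det;
  const tv = sub(orig, v0);
  const u = dot(tv, p) * invDet;
  if (u < 0 || u > 1) return Infinity;
  const q = cross(tv, e1);
  const v = dot(dir, q) * invDet;
  if (v < 0 || u + v > 1) return Infinity;
  const t = dot(e2, q) * invDet;
  return t > 0 ? t : Infinity;
}
```

The double-sided branch does a tiny bit more work (the absolute value and the sign-carrying invDet), which, multiplied by millions of ray/triangle tests per frame, is where the FPS cost mentioned above comes from.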

Hopefully this clears up some issues. As I mentioned previously: no BVH ray casting system is perfect (or a silver bullet for all cases), and mine is definitely far from perfect! There are a lot of tradeoffs like CPU build speed for the BVH, GPU data texture size, GPU raycasting speed, etc. that must be considered. So far this is the best blend and accepted tradeoffs that I have discovered. But if I see something faster or more efficient, I am always open to trying something new! :smiley:

2 Likes

To all,

As a followup to my recent reply to JSa, I think this will be a good spot to list the areas of the BVH/triangle model ray tracing system that still need to be fixed or improved.

  • Currently only 1 glTF model or 1 scene can be loaded and path traced on each run. If you need multiple separate models placed throughout the same scene in the near future, it is recommended that you first place them all into the same glTF file with a content creation package export (essentially placing them all in the same stationary glTF scene). Then, when loading, our system should in theory merge all of the various triangles into one huge list, with their material data intact, and be able to build and trace the large resulting BVH around all of those arbitrary triangles.

  • On that note, it would be nice to follow NVIDIA's lead and have an individual BVH for each separate model (called 'bottom-level' BVHs, exactly like the one we have working here currently) that go as deep as they need to in order to get down to the individual triangle level, but then construct a large over-arching 'top-level' BVH over the entire scene that only has access to the generalized large bounding boxes around each separate model in the scene. So the top-level BVH is very shallow and wide, as it can only go as deep as the whole-model level. If a ray traversing the shallow top-level BVH hits the large bounding box of an entire model, then it is granted access (through an integer id pointer) to the respective bottom-level BVH (like we have in place now), in order to get to each triangle. I currently have this system in place in my Sentinel game path-traced fan remake. This allows the individual models to even move around dynamically - rotate, scale, translate, whatever you want - either in a stationary scene or a dynamic real-time game scene, and the top-level BVH simply updates all of these transforms every frame. It is somewhat experimental still, as I had to use WebGL with all of its quirks, and NVIDIA's optimized BVH system code is proprietary and actually built in to their RTX cards, so I don't know how they make their secret sauce. :slight_smile:

  • For handling materials, I kind of hacked together a handful of lines of code for my three.js renderer that kept track of each triangle’s material assignment as it was encountered in the glTF file. It seems to work for the Damaged Helmet glTF model, which definitely has different material requirements for various triangles in its model structure. When I implement this in the near future for our renderer, maybe some of you can take a look and either confirm what I’m doing is OK, or offer suggestions or even do a PR to show me the better way. When implementing this for the three.js renderer, this was one of my weakest areas - reading triangle data from the mesh, whether it be a Three.Mesh(), or a Babylon.Mesh(). Suggestions are very welcome! :slight_smile:

  • Lastly, our beautiful new BVH system doesn't work on mobile, and to make matters more frustrating - I don't know why! I will say this, however: on my Samsung S21, the Utah Teapot with 900+ triangles displays just fine, but as soon as I select a heftier model like the Stanford Bunny (33,000+ triangles), the context is lost and the demo crashes. In the past, as soon as I add BVH and triangle model rendering to one of my projects, mobile immediately stops working. This confounds me because on every other demo in my whole three.js renderer repo and our newer Babylon renderer repo, my phone runs every non-BVH demo at 60-120 FPS, no problem. This even beats my laptops and some desktops I've used (although they are older). But the flipside is: even my old laptop can build a million-triangle scene and BVH and it will actually render (it might be 2 fps, ha), but it actually works! My latest greatest mobile device won't even compile a scene with more than 900 triangles.

I brought this issue up to Dr. Eric Haines (now with NVIDIA) when we were emailing briefly about my article proposal for RTG2, and he said it is probably a floating point precision issue for mobile vs. desktop. When he said that, it made sense to me when I later thought on it, because the floating point number accuracy for the tiny AABBs around each triangle has to be very precise in order to work properly (tree branch pruning as we traverse the BVH in the shader). I imagine there might be small fractional number discrepancies between the GPU’s representation of the precise BVH tree and the original BVH tree which was built on the js CPU side during demo startup.

To address the last issue above, in the near future I might try fooling with the accuracy constraints of the BVH builder, even going so far as quantization to whole integers and just scaling everything up (like we had to do for game worlds in the early 1990’s, lol), so that by the time it gets to the GPU data texture and GPU raycasting, it may be able to keep the tree representation intact and not crash. I imagine though that there would be a performance hit on mobile, due to the relaxed building constraints of the BVH tree, but at this point I will take anything just to see a path traced Bunny on my phone! :laughing:
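To illustrate that quantization idea, here is a toy sketch of my own - the key point is snapping bounds outward to an integer grid so a quantized box can only grow, never cut into a triangle it is supposed to contain:

```javascript
// Conservatively quantize an AABB to an integer grid. Min corners are
// floored and max corners are ceiled after scaling, so the quantized box
// always fully contains the original one (boxes may grow, never shrink).
function quantizeAABB(box, scale) {
  return {
    min: box.min.map((x) => Math.floor(x * scale)),
    max: box.max.map((x) => Math.ceil(x * scale)),
  };
}
```

Slightly fatter boxes mean slightly worse pruning during traversal (hence the expected performance hit on mobile), but every box coordinate becomes an exact integer that the CPU and GPU agree on.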

So having said all of this negative stuff, allow me to end this post by saying that I am excited to see how all of these renderer components come together in the near future! Already we have made many improvements to the system, algos, my knowledge, as well as added new features (as compared to my three.js renderer) by porting, working on, and implementing this renderer here at Babylon.js. I am confident that we will have a really cool piece of software!
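As a footnote to the top-level/bottom-level BVH bullet above, here is a toy sketch of the two-level idea (my own data layout, not NVIDIA's and not this renderer's actual code): the top level only stores each model's overall world-space box plus an integer id pointing at that model's bottom-level BVH.

```javascript
// Standard slab test: does the ray hit this axis-aligned box?
function slabTest(orig, dir, box) {
  let tmin = -Infinity, tmax = Infinity;
  for (let a = 0; a < 3; a++) {
    if (Math.abs(dir[a]) < 1e-12) {
      // Ray parallel to this slab: miss unless the origin lies inside it.
      if (orig[a] < box.min[a] || orig[a] > box.max[a]) return false;
      continue;
    }
    const t1 = (box.min[a] - orig[a]) / dir[a];
    const t2 = (box.max[a] - orig[a]) / dir[a];
    tmin = Math.max(tmin, Math.min(t1, t2));
    tmax = Math.min(tmax, Math.max(t1, t2));
  }
  return tmax >= Math.max(tmin, 0);
}

// topLevel: array of { box: {min, max}, blasId }. Returns the ids of the
// bottom-level BVHs this ray actually needs to descend into.
function hitBottomLevelIds(orig, dir, topLevel) {
  return topLevel.filter((n) => slabTest(orig, dir, n.box)).map((n) => n.blasId);
}
```

A real top level would itself be a small tree rather than a flat list, but the payoff is the same: whole models are rejected with one box test, and per-model transforms only require updating these few top-level boxes each frame.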

5 Likes

It's been a little over a year since this thread (and project) started. I discovered it today, read it all, and it really made my evening!
I'm rather a newbie developer, still studying and all, but let me tell you I'm BLOWN AWAY by this. Your work (everyone involved, and especially @erichlof), but also the energy and skills invested in this - it really comforts me in the decision I took to start coding.
Thanks a lot for the inspiration!

4 Likes

Improvements and new features added to the glTF/glb model path tracing demo!

Improved/New Features glTF Model Demo

Now when 'Transparent' is selected as the desired model material, the path tracing ray caster instantly switches from single-sided, back-face-culled triangle mode to double-sided triangle mode, giving photorealistic surface detail as well as more physically correct caustics on the walls/floor of the Cornell box. Remember that this does come at a cost to FPS, usually when the model takes up a large portion of your screen, and especially if you fly the camera directly through the interior of the model. The good news is that, if you recall, we only need this more expensive double-sided triangle mode for Transparent model materials - all others can use the more efficient single-sided, back-face-culled triangle mode, because the user should not be able to penetrate a solid object with the camera or with any rays that are randomly bouncing around the scene.

Also, I added new model transform controls to the GUI menu. Now when you use the sliders under the Model_Transform folder, the glTF/glb model and all of its thousands of triangles and AABBs will translate, rotate, and stretch/scale, all in real time! One of ray tracing's classic abilities is to apply the inverse transform to the rays rather than to the geometry, making the model appear to undergo the desired transformation without any loss of efficiency while raycasting the changed geometry.
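The inverse-transform trick reads roughly like this (a simplified sketch of my own covering only translation and uniform scale - a real renderer would use the full inverse 4x4 model matrix): transform the one ray into the model's local space, intersect the untouched geometry there, and scale the hit distance back to world space.

```javascript
// Transform a world-space ray into a model's local space. Simplified to
// translation + uniform scale; the general case uses the inverse 4x4
// model matrix. Uniform scale leaves the direction unchanged.
function rayToObjectSpace(orig, dir, translation, scale) {
  return { orig: orig.map((o, i) => (o - translation[i]) / scale), dir };
}

// Distance along the ray to a unit sphere at the local-space origin
// (dir is assumed normalized).
function hitUnitSphere(orig, dir) {
  const b = orig[0] * dir[0] + orig[1] * dir[1] + orig[2] * dir[2];
  const c = orig[0] ** 2 + orig[1] ** 2 + orig[2] ** 2 - 1;
  const disc = b * b - c;
  return disc < 0 ? Infinity : -b - Math.sqrt(disc);
}
```

For example, a unit sphere scaled by 2 and moved to (5,0,0) is hit by the world-space ray at local distance 4, so world distance 4 * 2 = 8 - all computed by intersecting only the original unit sphere. The same applies to a model's triangles and BVH: they never change, only the ray does.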

I had a lot of fun this evening stretching, squashing, blowing up, rotating, etc. the Bunny model (poor rabbit, but hopefully he doesn’t mind too much, ha!). :grinning_face_with_smiling_eyes:

6 Likes

Hi Eric,

Again, the progress is so cool to read and see! It's really fun to follow your work.

Some thoughts came to mind: did you check the glTF loading library of BabylonJS available here?
For instance, you will find some stuff concerning the way BabylonJS manages and loads PBRMaterial in glTF files: Babylon.js/glTFLoader.ts at master · BabylonJS/Babylon.js · GitHub
Not sure it will match what you do in this new raytracing engine but I hope it can help.

If I remember right, @bghgary is the glTF expert of BabylonJS, so you can ping him if needed on that subject.

Cheers, Pichou

1 Like

Hello @PichouPichou !

Thank you for the link and the tip about asking the resident glTF expert here at Babylon.js. I imagine that I will be asking for help at some point, ha!

Things are going well so far. I’m ‘dipping my toe in the glTF loader waters’, so to speak, making small steps forward. As you recently have seen, we have the geometry working pretty well, now I am currently trying to load in arbitrary glTF textures that might or might not be included with each model, like albedoTexture, bumpTexture, etc…, and use them inside our pathtracing shader.

I had an 'ok' system with the older three.js renderer, and I am thankfully finding the corresponding functions and variables that Babylon.js uses to do similar actions when loading, converting, and tracing glTF models. But like I mentioned, I'll probably need some help, or at least a 2nd pair of eyes on my code, to make sure I'm doing everything correctly.

I’m almost ready for the next update that actually uses the glTF textures and materials on the Duck and Damaged Helmet models for example. Hopefully soon! Thanks again for your support! :slight_smile:

2 Likes

Hello everyone!

I’m happy to report that glTF PBR materials are now working (at least initially), yay!
Check out the real time path tracing of the well-known glTF Duck model and the awesome Damaged Helmet glTF model:

glTF models, initial support for PBR materials

I was able to port some of my loading/logic code from my three.js renderer over to the Babylon.js renderer, in order to correctly load and handle glTF PBR materials. In an effort to make our system more robust and flexible, I also added new code and algos to both the setup js file as well as its accompanying glsl path tracing shader. Our renderer now is able to first load in an arbitrary glTF/glb file, then determine if it has any included PBR materials, then send these possible textures over to the GPU so that the path tracer can use the additional info when casting camera rays into the scene and bouncing those rays around.

Currently our renderer is able to load in and handle the following PBR textures that might or might not be included with each glTF model:
  • albedoTexture, a.k.a. diffuseMap (most common, almost required for the model's base color)
  • bumpTexture, a.k.a. normalMap (very common)
  • metallicTexture, a.k.a. metallicRoughnessMap (fairly common, might come in different rgba configurations - TODO: need to handle all cases)
  • emissiveTexture, a.k.a. emissiveMap (less common, but must include support)

Notice that ambientTextures/a.k.a. ambientOcclusionMaps a.k.a. AO_maps are not listed above - this is because we automatically get much more physically-accurate ambient occlusion in our render, for free! It is a by-product of using the ray tracing approach rather than the traditional rasterization approach.

Although the initial results look correct, I might need help with making this glTF loading/handling more robust. For example, the Damaged Helmet has a metallicRoughness texture included with its glTF file. The ‘metalness’ is on the .b blue channel, while the roughness is on the .g green channel of the included texture. I imagine that this kind of texture could arrive to our loader in all sorts of configurations using the various rgba channels available to the original model content creators. It would help a lot if you guys could take a look at how I am loading and handling the various textures that might or might not be included in each arbitrary glTF model.
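For what it's worth, the glTF 2.0 core spec does fix the channel layout for this particular texture - green carries roughness and blue carries metalness - so the decode itself is tiny (the helper name and texel shape here are my own):

```javascript
// Decode a glTF 2.0 metallicRoughness texel. The core spec fixes the
// channels: G = roughness, B = metalness; R and A are unused here.
// texel is assumed to hold normalized r,g,b,a values in [0,1].
function decodeMetallicRoughness(texel) {
  return { roughness: texel.g, metalness: texel.b };
}
```

Extensions and hand-edited assets are where other channel arrangements could sneak in, which is presumably why a robust loader still wants to be defensive about it.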

Also, curiously, the glb models (Teapot, Bunny, Dragon) seem to be defined in a right-handed coordinate system, and as a result need slight changes to the vertex positions and normals - specifically, negating the z components. Otherwise the model's triangles come in with incorrect windings (I'm just speculating) and the model renders black, and you can see just the inside of the model. Equally interesting is that if this negation is done on the 2 glTF models, Duck and Damaged Helmet, the same artifacts occur! Therefore, I have a boolean check so that the negation of the Z components is not applied to those models, and everything works perfectly. Maybe the glTF loader is able to figure out the coordinate system that was used to define the glTF model in the first place? Again, just speculation on my end.
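A sketch of that fix-up (my own helper, operating on the flat 9-floats-per-triangle layout): negating Z flips the handedness, and reversing each triangle's vertex order restores the winding so front faces stay front-facing.

```javascript
// Convert an unindexed triangle list (9 floats per triangle) from a
// right-handed to a left-handed system: negate every Z component and
// reverse each triangle's vertex order so the winding stays front-facing.
function flipHandedness(triangles) {
  const out = new Float32Array(triangles.length);
  for (let t = 0; t < triangles.length; t += 9) {
    for (let v = 0; v < 3; v++) {
      const src = t + v * 3;
      const dst = t + (2 - v) * 3; // swap vertices 0 and 2
      out[dst + 0] = triangles[src + 0];
      out[dst + 1] = triangles[src + 1];
      out[dst + 2] = -triangles[src + 2]; // negate Z
    }
  }
  return out;
}
```

Negating Z alone is a mirror transform, which is exactly what reverses the winding and turns front faces into back faces - hence the all-black, inside-out look described above when the winding is not fixed up alongside it.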

But the good news is that we are now able to load in a glTF model, determine if there are any of the relevant PBR textures included, and then send them over to the GPU and handle the ray casting/interactions with those materials. And the best part is, all of this is loaded in a few seconds, and then path traced in the browser, in real time! :smiley:

7 Likes

Hi again everybody,

In order to give some of the more experienced Babylon/glTF users time to look over my glTF loading and material handling code, I might try to implement some additional side features. If it’s ok with everyone else, I thought I might implement a physical sky model with time of day, as well as support for HDRI (rgbe format) environment loading. Although not directly related to our recent glTF efforts, I believe that a physically-based sky or an HDRI as the background for pathtracing would bring even more realism to the scenes, especially those that feature glTF models.

Of course in the meantime, if any of you notice anything out of place with my glTF handling code, or anything that could be done better, or additional features/parameter-handling that should be added, please let me know. I will do my best to make the glTF handling as useful (and the code as clear) as possible for the end users.

-Erich

2 Likes

Hi @erichlof,

Yeah, awesome to see the Helmet in your path tracing engine.
I am not an expert with GLTF but I think you will find helpful stuff in what BabylonJS already does.

Concerning the right/left-handed systems: from what we can see here, BabylonJS uses a left-handed system by default, and glTF is based on a right-handed system, so in order to make the glTF model compliant with BabylonJS we put a negative scale on the Z axis.

Then, concerning the metallicRoughnessTexture: as we can see here, we indeed have the metalness from blue and the roughness from green. I think these channel assignments are not supposed to change as long as your model follows the glTF 2.0 spec, right, @bghgary?

Finally, concerning the other textures: you will find how every one of them (bump, occlusion, emissive) is handled by BabylonJS here

=> Hope all of that will help you to manage glTF textures!

Indeed, I think the logical next step would be to support HDRI environments. Then the rendering quality will be :exploding_head:

Speaking of quality, do you think you could add an input to manage the scaling ratio of the rendering? It would allow us to really appreciate the power brought by path tracing!

Cheers, Pichou

2 Likes