Wanted to see if anyone else is interested in this topic. We do a lot of product rendering, and we'd like to look into offline solutions for rendering stills within the same universe as Babylon.js. I'm not talking about simply capturing high-res stills (there are other threads on that topic); I'm talking about engine-level changes to really enhance realism, optimized for realism rather than framerate, without having to leave the Babylon environment.
It would be cool to get some GPU path tracing, importance sampling, better AO/reflections, reflection occlusion, physically based cameras, AI denoising… and all that jazz for starters. I've been looking into NVIDIA Iray. I wonder if it would be easier to just write an Iray importer plugin for .babylon scenes and render with the Iray API, using Babylon.js as the look-dev tool. Just dreaming and playing with approaches. Again, goal #1 is to stay within the Babylon ecosystem and add Iray to that ecosystem, rather than forcing the user to go to a completely different package. Assuming we stick with the PBR standard, the look should be similar to BJS with some minor tweaks. I'd love to hear thoughts on other ideas, open source projects, and other APIs that people have had good luck with, plus thoughts on the challenges and blockers that we might face along the way.
As far as I know, you can't swap out the rendering engine in Babylon.js, and of course you can't add ray tracing (that will probably become possible in the future with WebGPU support, but not now).
I think you can check this pipeline:
Export scene to GLB
Import it into Blender
Blender has several render engines: Eevee, Cycles, Workbench…
Rendering in Blender can be run via the command line in headless mode.
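The last two steps of that pipeline can be scripted. A minimal sketch of a headless Cycles render, assuming the GLB exported from Babylon.js is saved as scene.glb and contains a camera (both the file name and output name below are just examples), invoked as `blender --background --python render_glb.py`:

```python
# render_glb.py — run with: blender --background --python render_glb.py
import bpy

# Start from an empty scene so default objects don't leak into the render
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the GLB exported from Babylon.js (hypothetical file name)
bpy.ops.import_scene.gltf(filepath="scene.glb")

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # offline path tracer, not Eevee
scene.cycles.samples = 512              # quality knob: more samples, less noise
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "product_still.png"

# write_still=True saves the single frame to render.filepath
bpy.ops.render.render(write_still=True)
```

Since this runs entirely from the command line, it drops straight into a server-side "render service" that a Babylon front end can call.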
What does “without having to leave the babylon environment…” actually mean?
Are you thinking about a 3D DCC app that runs in a browser?
Babylon is a real-time rendering engine, optimized for framerate. You are literally asking it to be the opposite of what it is.
If you are dreaming about having offline rendering capabilities on demand while in a browser session, you should probably look into server-side rendering as @Dok11 hinted; tools like Blender can be run headless via scripting.
Thanks for the response. My intention is not to change the BJS renderer into an offline renderer, or even to suggest that BJS take on the project, but to find strategies for integrating with other renderers. What I meant by ‘staying in the BJS environment’ is that we'd still do all shader updates, lighting adjustments, and other tweaks in Babylon, rather than having to tweak things again in a third-party solution and force artists to learn a different workflow. I just need this as a service on the back end to generate higher-quality imagery, so in theory, yes, I can export the whole scene as one big .gltf and render with Blender or some other renderer. That's an entirely valid solution, and I will investigate Blender's headless mode.
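For the export side of that workflow, Babylon.js already ships a GLB exporter in its serializers package, so the artist-facing scene never has to leave Babylon. A minimal sketch (function and file names here are just examples):

```javascript
// Requires the @babylonjs/serializers package
// (or the babylonjs-serializers UMD bundle, where it lives on BABYLON.GLTF2Export).
import { GLTF2Export } from "@babylonjs/serializers";

// `scene` is your existing BABYLON.Scene, already look-dev'd in Babylon.
async function exportForOfflineRender(scene) {
  // Serializes the scene (meshes, PBR materials, textures) into a single .glb
  const glb = await GLTF2Export.GLBAsync(scene, "productShot");

  // In the browser this triggers a file download; on a back end you could
  // instead pull the blob out of glb.glTFFiles and hand it to the render farm.
  glb.downloadFiles();
}
```

Because the exporter speaks glTF's PBR material model, what Blender (or another glTF-aware renderer) receives should be close to what Babylon's PBR materials show, modulo renderer-specific tweaks.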
Hello, I’m a little late to the party, but I just wanted to throw out some more responses and ideas for you.
First, I think I understand what sort of end solution you’re wanting- namely, a way to stay within the Babylon.js framework and system, and then render your scenes out to a ‘beauty’-render final image, without needing the rendering to be real-time or concerned with framerate (in other words, ‘sit and wait’ while your single-frame final image is being rendered to the finest, most photo-realistic result).
As others have mentioned previously in this thread, this problem has already been solved in the traditional CPU/GPU C++ space. There are several programs to choose from that let you either create a scene from scratch with a built-in editor, or import models/entire scenes in most popular formats (glTF, obj, fbx, etc). Then, most of them have some sort of smaller, low-res quick preview window that may or may not be ray traced (some use quick traditional rasterized real time graphics for this purpose of being interactive). Then you can select, edit, assign and re-assign different physically based materials for all of the components in your scene, even at a per-triangle level. Then maybe you select the lighting you want for your scene. When all this is done, you hit the ‘render’ button and the CPU (or GPU nowadays) ray tracer kicks in and slowly but accurately renders the final image while you go get a coffee (ha).
Although this problem has many solutions in the traditional C++ CPU/GPU (RTX) space, it is not quite there yet in the browser space. If you take a look at my projects for instance - the three.js and Babylon.js path tracing renderers, you’ll hopefully be greeted with demos that will run at 30-60 fps on any device that you might be on, including your cell phone. This is because my inspiration was the older Brigade 1 and 2 path tracing renderers, that tried to bring path tracing into real-time for the first time in history (way before RTX was even a spark in the minds at NVIDIA). I wondered if the same could be done in the browser, which meant it had to be done on all devices with a browser. So, 8+ years later, I am still exploring this space of real-time path tracing on all devices.
However, since ‘performance first’ has always been my guiding force, I have had to make trade-offs in final image quality (as in, whether the image could be put next to any pro renderer's output and be indistinguishable), and also in the robustness of my geometry and material systems: what they can and can't load in at startup, and what can and can't be changed on the fly in a visual editor. In fact, I don't even have a visual editor, and instead must define the scene ‘by hand’ in the custom shaders themselves. This works well enough for my tiny, focused demos, but is abysmally inadequate for general use by someone who just wants to load in an arbitrary glTF scene, change geometry, change lighting, assign and re-assign materials on the fly visually, and then have the whole thing correctly update and render out.
Having said that, there is another project in the browser space that comes to mind: Garrett Johnson’s Three-GPU-PathTracer.
Now if you compare our systems, you’ll find that Garrett and I have very different goals in mind, and our project structure and coding style/philosophies are also very different. Whereas I focus on getting the thing running as fast as possible (on a toaster, lol) and therefore cut corners whenever I can, Garrett’s renderer is more of a traditional renderer that has the features you would expect from a pro rendering package. I believe with his renderer, you can theoretically load in any arbitrary glTF scene (and other formats too), of any complexity in terms of geometry and the variety/amount of physically based materials. In addition to this robust system, Garrett has followed the more rigorous theoretical and mathematically-provable side of rendering, something you might find in the ‘Physically Based Rendering’ book, which is the bible for graphics and game graphics programmers alike. Because correctness and mathematically-provable Monte Carlo path tracing (importance sampling, russian roulette, etc.) results are the primary focus (rather than frame rate), you could probably put Garrett’s final renders against anything in C++ land, and the results would be nearly identical.
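To make the "importance sampling" idea concrete for anyone following along: the trick is to draw samples where the integrand is large, which shrinks variance without biasing the estimate. A tiny 1D toy (not renderer code — in a path tracer the "integrand" would be the incoming light times the BRDF):

```python
import random

def estimate_integral(n, sampler, pdf, f):
    """Monte Carlo estimate of the integral of f over [0,1]:
    average of f(x)/pdf(x) for x drawn from `sampler`."""
    total = 0.0
    for _ in range(n):
        x = sampler()
        total += f(x) / pdf(x)
    return total / n

random.seed(42)
f = lambda x: x ** 4  # toy integrand; true integral over [0,1] is 1/5

# Uniform sampling: pdf(x) = 1 everywhere
uniform = estimate_integral(20000, random.random, lambda x: 1.0, f)

# Importance sampling: pdf(x) = 3x^2, sampled by inverting its CDF (x = u^(1/3)),
# which concentrates samples near x = 1 where f is large
importance = estimate_integral(20000, lambda: random.random() ** (1 / 3),
                               lambda x: 3 * x * x, f)

print(uniform, importance)  # both converge to ~0.2; the second with lower variance
```

Both estimators are unbiased; the importance-sampled one simply needs far fewer samples for the same noise level, which is exactly why it matters so much for path-traced final renders.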
Therefore, you might look into a project like his, if you absolutely have to stay in the browser space, from conception to final render. However, I don’t believe Garrett’s renderer has a built-in visual editor - as I mentioned, this problem has not been completely solved in the browser space as of yet. So again, you would need to leave the browser, open up Blender, make your edits visually, then re-import back into his browser-based rendering system. The major downside for your particular case here is that Garrett has yet to port his system over to Babylon. So, you wouldn’t even be able to use Babylon, and instead would be forced to do things in the three.js world, which might not even be an option for you and your team.
I wish we had a complete rendering solution for the browser space, but we don’t yet. With my project, we have good performance that rivals that of C++ path tracers, I dare say, but none of the robustness and functionality for the end user (yet - I’m taking small steps in learning how to improve this weakness of my renderer). And with Garrett’s project, we have all the components of a traditional rendering package inside the browser, which is really cool, but you have to have a pretty beefy computer/GPU to keep the frame rate high, and the visual editor is non-existent, which is a deal breaker for most. Garrett is trying to rectify this, though: I think his renderer can switch back and forth between traditional real-time WebGL rendering and, when you let go of the camera and keep it still, a progressive, high-quality ray-traced mode whose result you can save out to an image on disk/server. But I'm not sure where his project currently stands on the visual editing situation.
Hopefully in the near future, we will be able to combine all of our strengths in the browser space, and have a free-to-use rendering package that rivals that of Blender Cycles. But as you can imagine, writing a complete system with correct ray tracing (which can take a whole career lifetime if you’re not careful, lol), handling of arbitrary scenes, providing a complete visual editing system/GUI, etc, will require much more effort by multiple people. Hope this clarifies the browser rendering space a little. Best of luck to you - I hope you can find a decent solution that fits your needs!
P.S. Just now thought of a couple more complete browser-based rendering systems:
I think lgltracer even has a visual editor, which is awesome - but again, both are not using Babylon.js, and sadly, both are not open source. In fact, it looks like they are paid-for service/software. But it does show that something like Cycles is possible inside the browser. Now we just need to make a free, open source version!