I can use something like Python on the backend with Blender Cycles, but then I have to charge users for such a service because of the high GPU consumption.
For non-professional users, I'd like to offer a similar experience for free, using their own browser's service workers for baking realistic scene snapshots (ideally with results on par with Blender Cycles).
I don't need super-realistic baking; as this video demonstrates, it's possible to render something considerably realistic in 2 minutes via a backend. Can you guess what technology they use? (On their site they say it's proprietary, but I doubt it.)
On the frontend, if you use workers they run on other CPU threads and don't interfere with the user's main browsing thread.
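Roughly what I have in mind is something like the sketch below: fan the lightmap tiles out to a pool of workers so the main thread stays responsive. (Strictly speaking these are dedicated Web Workers, which are the usual tool for off-main-thread compute; `bake-worker.js` and the tile message shape are hypothetical placeholders for whatever baker actually runs inside.)

```ts
// Sketch: distribute lightmap tiles across a pool of Web Workers.
// "bake-worker.js" is a hypothetical script that path-traces one tile per message.
const workerCount = navigator.hardwareConcurrency ?? 4;
const workers = Array.from({ length: workerCount }, () => new Worker("bake-worker.js"));

interface Tile { x: number; y: number; size: number; }

function bakeTiles(tiles: Tile[]): Promise<ImageData[]> {
  if (tiles.length === 0) return Promise.resolve([]);
  const results: ImageData[] = [];
  let next = 0;
  return new Promise((resolve) => {
    for (const worker of workers) {
      worker.onmessage = (e: MessageEvent<{ tile: Tile; pixels: ImageData }>) => {
        results.push(e.data.pixels);
        if (next < tiles.length) {
          worker.postMessage({ tile: tiles[next++] }); // keep the worker busy
        } else if (results.length === tiles.length) {
          resolve(results); // every tile has come back
        }
      };
      if (next < tiles.length) worker.postMessage({ tile: tiles[next++] });
    }
  });
}
```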
And yes, I also only have a static scene.
And what about using Babylon? Can Babylon unfold lightmap UVs and bake textures?
I was indeed the maker of the raytracing renders from Kozikaza. I suggest you use an already made raytracing engine like VRay or Cycles.
If your project is funded I can possibly help you with that, shoot me a DM.
@sebavan, does this mean Babylon currently cannot create lightmaps and shadowmaps from a scene?
I may get away with texture baking, by combining the light + shadow maps into a global texture atlas dynamically in Babylon, then using two UV sets with tiled textures.
My scene is well optimised, so it can afford expensive lighting, reflections and shadows.
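Something like this is what I mean on the Babylon side, assuming the atlas is already baked (the file name is a placeholder) and the meshes carry a second UV set:

```ts
import { PBRMaterial, Texture, Scene, AbstractMesh } from "@babylonjs/core";

// Sketch: apply a pre-baked light+shadow atlas (hypothetical "lightmap_atlas.png")
// through the second UV set, leaving the tiled albedo textures on UV set 0.
function applyLightmapAtlas(scene: Scene, meshes: AbstractMesh[]): void {
  const atlas = new Texture("lightmap_atlas.png", scene);
  atlas.coordinatesIndex = 1; // sample the atlas with the second UV set

  for (const mesh of meshes) {
    const mat = mesh.material as PBRMaterial;
    if (!mat) continue;
    mat.lightmapTexture = atlas;
    mat.useLightmapAsShadowmap = true; // modulate the lighting instead of adding to it
  }
}
```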
@ecoin
Hello, I have been asked about this very topic on the Babylon.js path tracing project forum thread as well as my own older three.js path tracer project. For now unfortunately, I have to agree with @CraigFeldspar and also suggest that you use an existing rendering package and tool set like Cycles (or VRay, or something similar).
That’s not to say that three.js or Babylon.js couldn’t someday do lightmap baking entirely in the browser - it’s just that there are so many moving parts in the baking toolchain/pipeline that it is difficult (as of today) to somehow cobble all of these disparate pieces together into an efficient solution.
As @Vinc3r states, there is a list of things that have to work in tandem in order for the result to come out correct and professional-looking. Personally speaking, I am fine with the 1st item on that list (the ray tracing engine), and have mainly focused on writing ray tracers and path tracers in the browser for the last 7 years. So I can confidently say that yes, the 1st part of the tool chain for baking is well within the reach of the browser, especially with WebGL2 now, and WebGPU on the horizon.
But since I have narrowly focused on this aspect, I have necessarily neglected the other equally important parts of the baking toolchain, namely the texture handling and UV unfolding into texture atlases. I honestly don’t know where to begin with that side of the problem - it would appear to an outsider that I am a texture graphics noob, even though I am comfortable with the ray tracing side of the problem, ha ha.
I’m hoping that in the future we can somehow take a ray tracer (like mine, but it doesn’t have to be my code) and marry it with a texture and UV unfolding tool that was built with the browser in mind. Also maybe have some kind of storage solution to save your progress in case the rendering/baking process gets interrupted, or the user gets kicked offline. There’s just a lot of things to think about before we can have a total baking package inside a WebGL library like three.js or Babylon.js.
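Just to illustrate the "save your progress" idea, here is a rough sketch (database and store names are made up) of stashing an accumulated tile buffer in IndexedDB so an interrupted bake could resume where it left off:

```ts
// Sketch: persist baking progress in IndexedDB. "bake-progress" / "tiles" are
// hypothetical names; Float32Array sample buffers are structured-cloneable.
function openBakeDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("bake-progress", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("tiles");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveTile(key: string, pixels: Float32Array): Promise<void> {
  const db = await openBakeDb();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction("tiles", "readwrite");
    tx.objectStore("tiles").put(pixels, key); // out-of-line key per tile
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```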
I hope you are able to find a decent solution that keeps your users happy. Best of luck!
Thank you everyone for the amazing links, I’m devouring this topic like a hungry monster
I admit I’m a total noob to this whole lightmap, baking, raytracing thing, but the other day I saw the UE5 Brushify demo, where the whole terrain can be interacted with (rocks created procedurally), and everything (including global illumination) is computed in real time.
The creator used the Voxel Plugin instead of ray tracing, because he said ray tracing is too slow.
So the question is, is ray tracing the right thing to use? Or are there better approaches?
@erichlof, I’ve checked your amazing work on path tracing and the stunning demos; I’ll definitely test it out first because it could be sufficient for now.
In the app, I have all diffuse/albedo textures with UVs generated automatically inside Babylon, and only the glTF mesh data is binary, so it should work.
While this path tracing is not fast enough for a real-time VR experience, I believe it’s fast enough to create static snapshots, which I can turn into an environment texture atlas.
Then use Babylon Probes to capture the path-traced environment and derive the final baked texture for each mesh. It’s a lot of steps, but doable, IMHO.
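For the probe step, something along these lines is what I picture (just a sketch using Babylon’s ReflectionProbe; the per-mesh wiring is an assumption about how I’d hook it up, not a finished pipeline):

```ts
import { ReflectionProbe, PBRMaterial, Scene, Mesh } from "@babylonjs/core";

// Sketch: capture the already-baked static surroundings into a cube map with a
// ReflectionProbe and feed it back to one mesh's material as its environment.
function addProbeFor(scene: Scene, target: Mesh, surroundings: Mesh[]): ReflectionProbe {
  const probe = new ReflectionProbe("probe_" + target.name, 512, scene);
  probe.attachToMesh(target); // render the cube map from the mesh's position
  for (const m of surroundings) probe.renderList?.push(m);

  const mat = target.material as PBRMaterial;
  if (mat) mat.reflectionTexture = probe.cubeTexture; // use the captured environment
  return probe;
}
```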
@ecoin
Thank you for the kind words! To answer your question 2 posts ago, yes Ray Tracing (Path Tracing) is what you should use when doing the first stage of baking lightmaps. At least that’s what I’ve seen on almost all baking systems. Now your link about that Voxel plugin is really interesting and the creator(s) of that plugin have done an amazing job with voxels, but I don’t know how standard this would be for using on lightmap baking, or how much help you could receive if you went down that road and hit a roadblock. The ray tracing approach has been around for decades, but the voxel baking idea sounds pretty young in comparison. I’m not saying it’s not possible, just that it would be a non-traditional approach.
Personally speaking, voxels (and also point-cloud type data) are a very interesting area of research that I would like to explore someday. However, up until this point, my main experience has been with triangulated models and traditional raycasting against their triangle primitives, and then raytracing (or path tracing) those cast rays to get the realistic lighting effects. An amazing project that has gone down the raytracing/voxel road for rendering is MagicaVoxel. Even though this voxel renderer is really awesome, I’m not sure if there would be much benefit (quality or speedup) from using this over the traditional ray/triangle pathtracing pipeline for baking. Would be an interesting experiment though!
I think your plan in your most recent post is a sound one. Depending on your end-user experience though, I’m not so sure about needing Babylon Probes. Let’s say you successfully bake the lighting down to a large texture atlas (or individual lightmaps that match the dimensions of each model’s diffuse albedo texture) - then I think you can just apply the lightmaps and render the scene inside Babylon and you’re done. If the scene is truly static, you could conceivably hook the whole scene up to a trackball-style viewer where the user could rotate the entire scene with their mouse (or a swiping gesture on mobile), or you could give a 1st-person experience of letting the user ‘mouselook’ and fly or ‘walk’ their camera through the scene that already has all the cool lighting info baked into lightmaps (a UE4 architectural walkthrough visualization comes to mind).
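For what it’s worth, those two viewing styles map straight onto Babylon’s built-in cameras; a quick sketch (positions and angles are made up):

```ts
import { ArcRotateCamera, UniversalCamera, Scene, Vector3 } from "@babylonjs/core";

// "Trackball" style: the user orbits the baked scene with mouse drag / touch swipe.
function makeOrbitViewer(scene: Scene, canvas: HTMLCanvasElement): ArcRotateCamera {
  const cam = new ArcRotateCamera("orbit", Math.PI / 2, Math.PI / 3, 10, Vector3.Zero(), scene);
  cam.attachControl(canvas, true);
  return cam;
}

// First-person "walkthrough" style: mouselook + keys through the baked interior.
function makeWalkthrough(scene: Scene, canvas: HTMLCanvasElement): UniversalCamera {
  const cam = new UniversalCamera("fps", new Vector3(0, 1.7, -5), scene);
  cam.attachControl(canvas, true);
  return cam;
}
```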
However, if you have movable characters (like in a game), yes you would need some kind of probes to ‘re-capture’ the surrounding lighting (that was initially pre-baked and saved into your lightmaps) in order to update the moving character model’s own bounced colors/lighting from its immediate environment as it moves through the scene. I may be wrong, but the first instance of this technique goes all the way back to id Software’s Quake 2 (or was it even Quake 1?).
Again, when this baking process leaves the ray tracing stage of the pipeline, that’s where my knowledge starts to fail (lol) and I would have to rely on someone else or someone else’s tools to complete this complex process. I think this is a reason that Blender Cycles and VRay have entire teams of people working on the renderers and their various stages, each with its own unique set of skill requirements and expertise.
Do keep us updated on which path you decide to take!
-Erich
In the last video (7/7) @ ~4:15 they show their denoiser, after doing only 1 sample per pixel.
For real time, there was a GDC talk I watched that basically said they diff the previous frames’ pixels, then use the diffs to keep a kind of moving average of the rate of change, to quickly identify areas that will need to be sampled again.
Anyway, it seems like a WebGPU denoiser is the thing to look into.
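If I understood the talk right, the idea is roughly the following (a CPU-side sketch over plain RGBA float buffers; in practice this would be a WebGPU compute pass, but the math is the same):

```ts
// Sketch of the "diff, then moving average" idea: blend slowly where the image
// is stable, blend fast and flag the pixel for re-sampling where it just changed.
function accumulate(
  history: Float32Array,   // running average, updated in place (RGBA)
  current: Float32Array,   // this frame's noisy 1-spp result (RGBA)
  changed: Uint8Array,     // per-pixel flag: needs more samples
  diffThreshold = 0.1
): void {
  const pixelCount = history.length / 4;
  for (let p = 0; p < pixelCount; p++) {
    const i = p * 4;
    const diff =
      Math.abs(current[i] - history[i]) +
      Math.abs(current[i + 1] - history[i + 1]) +
      Math.abs(current[i + 2] - history[i + 2]);

    // Large change => trust the new sample more and mark the pixel for re-sampling.
    const alpha = diff > diffThreshold ? 0.5 : 0.05;
    changed[p] = diff > diffThreshold ? 1 : 0;

    for (let c = 0; c < 4; c++) {
      history[i + c] += alpha * (current[i + c] - history[i + c]);
    }
  }
}
```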
After a lot of research around the subject, I concluded that, for now, the best approach is to follow the pipeline you described in the tutorial.
My question is, can you estimate the average time it takes to bake one 4k lightmap texture atlas (without any of the PBR textures, just pure light + shadow, direct + diffuse) of a scene with roughly 100 meshes and 10 light sources (each can have a different temperature) on a single Intel/AMD 2 GHz CPU without a GPU, using Blender Cycles X? (Estimated at 20 sample passes with Blender’s AI denoiser.)
This will run on a headless Linux server, so performance is usually better than on a user desktop with the same specs.
It’s very scene-dependent, so I can’t really guess. If you really use only 20 samples, it should be a matter of minutes (20 seems very low for a baking render; you’ll quickly see if it’s enough).
In my last baked scenes, the render took between 30 min and 1h30 per 2k texture, using something like 450-500 samples (but that was not under v3.0 with denoising available).
It’s not perfect, but good enough. I reckon for baking only light I can easily reduce it to 20 samples because I want to make this feature accessible to everyone by making it 100% FREE!
For pros that want top notch quality, obviously they can pay for the upgrade, since it’s not sustainable to offer expensive GPU servers for free.