Does this mean even a minor release can be breaking?
That looks sweet! It will make my life a lot easier.
I hope we can use it without the node editor as well. I can see some use cases where the topology of the graph changes dynamically, and the node editor would not be suitable in that case.
Yes, you can create the frame graph “by hand”:
On the left, the code that generates the output on the right.
Note that even with a node render graph, you can programmatically create / remove / connect / disconnect blocks (as you can do for a node material, for example).
However, modifying a graph will require rebuilding it. It’s not expected to take too much time, but it’s not free either. That’s why we added a feature that lets you disable a task in the graph, which doesn’t require a rebuild. Disabling a task may not be free either (it may require copying the source texture to the destination texture, depending on the task), but it can be useful in some cases.
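To make the “disable a task without a rebuild” idea concrete, here is a minimal self-contained sketch. This is not Babylon.js’s actual FrameGraph API; the `Task`, `execute`, and toy `Texture` types below are all hypothetical stand-ins for the concept:

```typescript
// Illustrative only: a toy post-processing chain where tasks can be
// toggled without rebuilding the chain itself.
type Texture = string[]; // stand-in for a render target's contents

interface Task {
    name: string;
    enabled: boolean;
    run(input: Texture): Texture;
}

// Building the task list is the (relatively) expensive step;
// flipping `enabled` on an existing task is cheap.
function execute(tasks: Task[], source: Texture): Texture {
    let current = source;
    for (const task of tasks) {
        // A disabled task is skipped: at worst it just passes the source
        // through to the destination (the "copy" cost mentioned above).
        current = task.enabled ? task.run(current) : current;
    }
    return current;
}

const blur: Task = { name: "blur", enabled: true, run: (t) => [...t, "blurred"] };
const tone: Task = { name: "tonemap", enabled: true, run: (t) => [...t, "tonemapped"] };

const full = execute([blur, tone], ["scene"]);

blur.enabled = false; // no rebuild needed
const partial = execute([blur, tone], ["scene"]);
```

Here `full` goes through both tasks, while `partial` skips the disabled blur pass, without the task list being reconstructed in between.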
Looks very flexible indeed! Can’t wait to try it!
Yep, but we do it very, very rarely, and only for types.
What’s the schedule?
So, with the development of Babylon X and the merging of breaking changes, when would there be a “legacy” or “long-term” bugfix-only branch?
There will not be one :) We will limit breaking changes to a minimum (like now, actually; we only maintain the current version).
The current upcoming breaking change is TypeScript-only and quite easy to work around (if you need to call getScene(), you simply have to cast the result as Scene and you are good).
If we end up having a more massive breaking change, we will probably revisit the need for a maintenance branch.
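For illustration, here is a minimal self-contained sketch of the kind of cast being discussed. `CoreScene`, `Scene`, and `Mesh` below are hypothetical stand-ins, not the real Babylon.js classes:

```typescript
// Hypothetical types only, to show why the workaround is TS-only:
// a trimmed-down base type that getScene() would be declared to return.
class CoreScene {}

class Scene extends CoreScene {
    // Example of a member that would only exist on the full Scene.
    animationsEnabled = true;
}

class Mesh {
    constructor(private _scene: CoreScene) {}

    // Under the proposal, getScene() would return the narrower CoreScene type.
    getScene(): CoreScene {
        return this._scene;
    }
}

const scene = new Scene();
const mesh = new Mesh(scene);

// The workaround: cast the result back to Scene when you know that's what it is.
// At runtime nothing changes, which is why plain JavaScript users are unaffected.
const fullScene = mesh.getScene() as Scene;
const canAnimate = fullScene.animationsEnabled; // the cast restores access to Scene-only members
```

Since a TypeScript cast is erased at compile time, the “breaking” part is purely about type declarations, not runtime behavior.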
As Taoism says: The Tao begets one, one begets two, two begets three, three begets all things, and all things return to one.
Finally, someday in the future, maybe WebGL support will be deprecated in favor of WebGPU.
Time flies.
Alright folks! Now that we have been working through the PR, we realized that the impact would be too big for the framework to remain sustainable.
So there will be no CoreScene. We will find a better way to reduce size without compromising on perf or backward compat!
A bit late to the party, but I wanted to mention I’m really excited about this! I’ve been doing some hacks to control the rendering process, producing unmaintainable code (even for me, after 2 weeks of holidays). I’ve always been longing for a nice and clean render graph like this one.
IMO that would also push the engine to the next level, where new rendering techniques can be shared easily in combination with NME and playgrounds…
Can’t wait!
Oh, I’m really sorry for all the hard work you did and now need to flush… simply because of this ‘ridiculous’ commitment to actually follow the vision and mission of BJS and remain true to your roots, banner, and people.
Well, at least it shows once again (if it was even necessary) that it’s no easy task to remain consistent and stand your ground. We are the lucky ones because we’ve got someone we can trust. It doesn’t happen all too often these days.
Thanks mate! It means a lot to me!
I will get back to it soon with a better version
And here is the new, far simpler solution:
Allow users to provide their own custom rendering function for the scene by deltakosh · Pull Request #15655 · BabylonJS/Babylon.js (github.com)
In a nutshell, the user can now take over the rendering function:
scene.customRenderFunction = () => {
    // Clear the back buffer
    engine.clear(new BABYLON.Color4(0, 0.3, 0.5, 1), true, true);

    // Update view and projection matrices based on the active camera
    scene.updateTransformMatrix();

    // Set the viewport size
    engine.setViewport(camera.viewport);

    // Compute world matrices and render
    for (let index = 0; index < scene.meshes.length; index++) {
        const mesh = scene.meshes[index];
        if (!mesh.material) {
            continue;
        }

        mesh.computeWorldMatrix();

        if (mesh.material.needAlphaBlending()) {
            engine.setAlphaMode(BABYLON.Constants.ALPHA_COMBINE);
        } else {
            engine.setAlphaMode(BABYLON.Constants.ALPHA_DISABLE);
        }

        mesh.directRender();
    }
};
This will then be the place where the user can leverage the Render Graph to enable complex rendering.
Are there any architectural issues with WebGPU?
For example, would something like snapshot rendering work well with this method?
This code will be compatible with everything as long as you add the expected API calls.
The tradeoff here is that we give you all control but you have to do the work.
I’m actually writing a blog post about this new option. It should be ready by Thursday EOD.
I wonder if there could be a Render Editor like NME.
Getting really serious