b) ES6 import paths are pretty long and differ from the usual pattern; I'm not sure if it's worth fixing:
import { FrameGraphCascadedShadowGeneratorTask } from "@babylonjs/core/FrameGraph/Tasks/Rendering/csmShadowGeneratorTask.js";
compared to
import { DirectionalLight } from "@babylonjs/core/Lights/directionalLight.js";
c) objectList in the tasks needs an input of type FrameGraphObjectList. An incorrect assignment such as objectList = [sphere]; does not produce a warning; instead, the log reports mesh undefined in _checkReadiness, which doesn't help locate the mistake.
d) When a frame graph replaces the scene render loop, there is no support for user interactions, i.e., keyboard, pointers, etc. I understand only a few observables are supported, but I can't envision a modern scenario where a user only needs to see and not interact in some way. In any case, my testing bottlenecked on this.
FrameGraphCascadedShadowGeneratorTask is probably the longest name you’ll find in the frame graph framework, but FrameGraphCSMTask wouldn’t have been very appropriate, since the generator is called CascadedShadowGenerator. I don’t think names like this are a problem, though (?)
You probably test in JavaScript rather than TypeScript? With TypeScript, you would get an error if you don't use the right type. In JavaScript, you can pass anything you want to any function; that's not specific to the frame graph.
Setting scene.cameraToUseForPointers = yourCamera should fix it (see Babylon.js docs). If not, can you provide a PG so that I can have a look?
Not a big deal, it just wasn't as intuitive. The local npm package didn't have a .js file with the same name, so I had to do some educated guessing.
I see, the problem is with the GPU picker, not with the pointer observables. Hmm…
Also, another piece of feedback: if it's not too much trouble, could you demonstrate an example of customRenderTargets with a frame graph? I was trying to test the water shader against a frame graph but can't seem to pipe the refraction texture from the frame graph to the shader material. The logs complain about textures[1].isReady or something similar.
The refraction texture you generate should be connected to the dependencies input of the ObjectRenderer task that is going to use the texture. Then, on the JavaScript side, use nodeRenderGraph.onBuildObservable to retrieve the texture and set it as the refraction texture for your material (line 146+ in the PG of the example above).
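A minimal sketch of that wiring, assuming a texture block named "refraction" in the graph and a "refractionSampler" sampler on the water ShaderMaterial (both names are assumptions, not taken from the PG):

```javascript
// Re-hook the texture each time the node render graph is (re)built.
nodeRenderGraph.onBuildObservable.add(() => {
    // "refraction" is an assumed block name; the exact way to read the
    // texture back may differ, see the PG above for the authoritative version.
    const refractionTexture = nodeRenderGraph.getBlockByName("refraction").texture;
    waterMaterial.setTexture("refractionSampler", refractionTexture);
});
```

Using the build observable matters because the graph's textures are recreated on each rebuild (e.g. on resize), so a texture fetched once at startup can become stale.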
I'm not sure if it's my GPU only (GeForce RTX 4060). Using the example PG with the inspector open, I aggressively resized the PG many times (dragging the middle separator). Occasionally, my screen goes white. Now that I've managed to trigger it once, it's fairly consistent to repro. Screenshot below.
It affects all other PG tabs. Reloading solves the problem. It seems like rebuilding the frame graph crashes the context? I'm not sure how serious this could be in userland, but I do not recall this happening with scene.render. It would also help if someone else could repro on their device.
The geometryViewDepthTexture of the FrameGraphGeometryRendererTask / NRGE node is supposed to generate depth from the view. Looking in the inspector, all I see is a flat red texture; there does not seem to be any depth info even when I create many meshes into the distance. Is this WAI? I've tried geometryScreenDepthTexture as well; it's better but still not correct, as the far background is still all red. The best I got was applying a post-process with a circle of confusion, but it doesn't seem right.
How should I get the equivalent of the depthRenderer's depthMap from the frame graph?
I can reproduce (only in WebGL), I’ve added it to my todo list.
Yes. The depth in the geometry view depth map is the Z coordinate in camera view space, which ranges from near to far plane (camera minZ/maxZ). So, most values will be > 1, meaning the texture will appear full red in the inspector. Regarding the geometry screen depth map, far bg being red is expected, as screen depth is between 0 and 1, 1 corresponding to the far clip plane.
Here’s an example of using the geometry screen depth map:
There is currently no equivalent to the default depth map, because we don’t typically use the value “as-is” and because linearized view depth is not generated by the geometry/pre-pass renderer, which the geometry renderer task aims to replace.
Do you have a use case where the linearized view depth value is used without being converted back to view depth or screen depth?
It's fine; now that I know how the geometryViewDepthTexture works, I can modify my shader. Case in point: I think I have successfully ported the stylized water shader to a frame graph version. The default version was using the depthRenderer's depthMap; with the geometryViewDepthTexture, I simply normalize it in the shader.
Now, if you look at the scene perf with the inspector, the frame graph version does not have activeMeshes, even if I set alwaysSelectAsActiveMesh = true;. Is this WAI?
Yes, a number of inspector statistics don't yet work with frame graphs (active meshes, some counters under "Frame Steps Duration"); it's on my todo list.
An existing CSM cannot co-exist with a frame graph CSM; the console throws Error: Active camera not set. In v8.10.0, both the CSM and the CSM task work together, although the CSM task overrides the CSM. WAI?
You need to pass the camera to the CSM builder to fix the error (though there is no need to create a CSM manually when using a scene-level frame graph, as it will never be used / the shadow map will never be generated):
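A minimal sketch of the fix, assuming the CSM is created in code (passing the camera as the fourth constructor parameter; this is a sketch, not the original snippet from this thread):

```javascript
import { CascadedShadowGenerator } from "@babylonjs/core/Lights/Shadows/cascadedShadowGenerator.js";

// Passing the camera explicitly avoids relying on scene.activeCamera,
// which is not set when a frame graph drives the rendering.
const csm = new CascadedShadowGenerator(1024, light, undefined, camera);
```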
In any case, I will create another PR in the next few days to improve the handling of autoCalcDepthBounds, which will allow this error to be removed without modifying the constructor.
Testing instances vs the frame graph: when passing an instance into the object list, both the root mesh and the instance show up in the color texture. I presume there should be a way to exclude the root mesh? The same question extends to thin instances, i.e., how do I render a specific thin instance to a texture with a frame graph?
edit: I forgot to ask, is there a workflow for custom effect renderers? Can I create custom frame graph wrappers/classes for interfacing with the scene frame graph, or will the build process complain?
This PR will fix the bug (the root mesh should not be displayed if it’s not in the mesh list):
Thin instances are part of a mesh: if a mesh is in a frame graph object list, its thin instances will be displayed. This is not specific to frame graphs; it's the same if you add a mesh with thin instances to an RTT.renderList: all thin instances of the mesh will be displayed in the RTT.
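For comparison, a short sketch of the RTT behavior mentioned (variable names are assumptions; matrixData is a Float32Array of 16-float world matrices):

```javascript
import { RenderTargetTexture } from "@babylonjs/core/Materials/Textures/renderTargetTexture.js";
// Side-effect import that adds the thin instance methods to Mesh.
import "@babylonjs/core/Meshes/thinInstanceMesh.js";

// Create the thin instances on the mesh (one matrix per instance).
mesh.thinInstanceSetBuffer("matrix", matrixData, 16);

// Adding the mesh to the render list renders ALL of its thin instances;
// there is no per-thin-instance selection, same as with frame graph object lists.
const rtt = new RenderTargetTexture("rtt", 512, scene);
rtt.renderList.push(mesh);
scene.customRenderTargets.push(rtt);
```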
You can create frame graph tasks yourself and add them to a frame graph. You can model them on the existing tasks (under FrameGraph/Tasks/), and I plan to make an example demonstrating how to write a custom task / node render graph.
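A skeletal sketch of what such a task could look like (the method and helper names here are assumptions modeled on the shipped tasks under FrameGraph/Tasks/; check the actual FrameGraphTask base class before relying on them):

```javascript
import { FrameGraphTask } from "@babylonjs/core/FrameGraph/frameGraphTask.js";

export class MyCustomTask extends FrameGraphTask {
    // record() is where a task declares its passes; it runs when the graph is built.
    record() {
        const pass = this._frameGraph.addRenderPass(this.name);
        pass.setExecuteFunc((context) => {
            // Do your custom rendering work here using the frame graph context.
        });
    }
}
```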