Here we are my friend:
Using Multiple Scenes | Babylon.js Documentation
Hmmm still not getting the depth sorted out correctly for the particles… Feeling pretty desperate about this
Feeling that even though I get this to work I'll hit some other problem when trying the code in the real game
Ok so here is a two-scene version with the particles working:
shader shenanigans | Babylon.js Playground
Now let's reintroduce the PP to see how it goes
The problem comes from the postprocess:
shader shenanigans | Babylon.js Playground
I need to check why it is messing with the depth buffer
Ok of course that cannot work lol.
The postprocess renders the scene to a TEXTURE, so the depth buffer of the default framebuffer is left empty; when the second scene renders on top of it, there is no depth to test against
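For reference, the two-scene overlay looks roughly like this (a minimal sketch, variable names are mine; cameras, lights and meshes omitted):

```ts
// Two scenes sharing one engine: the particle scene must NOT clear the buffers,
// so it can depth-test against what the main scene wrote to the default framebuffer.
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas);
const mainScene = new BABYLON.Scene(engine);     // meshes + postprocess live here
const particleScene = new BABYLON.Scene(engine); // particles live here

particleScene.autoClear = false;                // keep the main scene's colors
particleScene.autoClearDepthAndStencil = false; // keep its depth/stencil too

engine.runRenderLoop(() => {
    mainScene.render();     // with a postprocess, this renders into an offscreen texture,
    particleScene.render(); // so the default framebuffer depth is empty at this point
});
```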
I feel like in your case the PP road is not the one you can use. Applying a PP to only a part of the image is tough, as you can see
I wonder if you can use a different road: maybe a specific shader for the mesh that renders the material AND the outline, so you can control it per mesh?
This technique will also be the fastest, as it will not require an RTT.
Other option (thinking out loud): you could re-render the scene into the depth buffer only, so the particles interact correctly, but that will be slower for sure.
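Very much a thinking-out-loud sketch of that option (untested; it reuses the variable names from the sketch above, and `disableColorWrite` / `postProcessesEnabled` are existing Material/Scene properties):

```ts
engine.runRenderLoop(() => {
    mainScene.render(); // post-processed image reaches the screen, but its depth stays offscreen

    // Second pass: draw the same geometry again, writing depth only.
    mainScene.meshes.forEach((m) => { if (m.material) { m.material.disableColorWrite = true; } });
    mainScene.postProcessesEnabled = false; // skip the PP for this pass
    mainScene.autoClear = false;            // keep the post-processed colors on screen
    mainScene.render();
    mainScene.autoClear = true;
    mainScene.postProcessesEnabled = true;
    mainScene.meshes.forEach((m) => { if (m.material) { m.material.disableColorWrite = false; } });

    particleScene.render(); // the particles now have real depth to test against
});
```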
I really believe that, from a practical and performance standpoint, you want to only update the shader of the outlined mesh
@Evgeni_Popov: I wonder here, if we want to keep this PP approach, is there any way to use the frame graph to reuse the depth buffer?
@KallkaGo You are not far off in your shader, honestly. You can use the normal to detect the edges.
Something along these lines:
```glsl
vec3 viewDir = normalize(vPosition - cameraPosition);
float ndotv = dot(normalize(vNormal), viewDir);
float edgeFactor = 1.0 - abs(ndotv); // max near the silhouette (when ndotv ~ 0)
vec3 color = mix(baseColor, edgeColor, edgeFactor);
```
Yeah, this could be included in my mega shader perhaps, AND maybe it could even be controlled with some number input value of 1/0 for whether the outline should be visible. I may try to create this next… It really feels like the best option, like you said, for code simplicity and model/mesh simplicity (no need to invert either). Just have to prototype it with a simple NME and then try to implement it into the mega shader…
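To get myself started, something like this is roughly what I have in mind (untested; `uOutline`, the shader names and the colors are just placeholders; in NME the same thing would be a Float input block feeding the mix):

```ts
// Fresnel-style outline with a 1/0 visibility toggle.
BABYLON.Effect.ShadersStore["toonVertexShader"] = `
    precision highp float;
    attribute vec3 position;
    attribute vec3 normal;
    uniform mat4 world;
    uniform mat4 worldViewProjection;
    varying vec3 vPositionW;
    varying vec3 vNormalW;
    void main(void) {
        vec4 worldPos = world * vec4(position, 1.0);
        vPositionW = worldPos.xyz;
        vNormalW = normalize((world * vec4(normal, 0.0)).xyz);
        gl_Position = worldViewProjection * vec4(position, 1.0);
    }`;

BABYLON.Effect.ShadersStore["toonFragmentShader"] = `
    precision highp float;
    varying vec3 vPositionW;
    varying vec3 vNormalW;
    uniform vec3 cameraPosition; // bound automatically by ShaderMaterial
    uniform float uOutline;      // 1.0 = outline visible, 0.0 = hidden
    void main(void) {
        vec3 viewDir = normalize(vPositionW - cameraPosition);
        float edgeFactor = 1.0 - abs(dot(normalize(vNormalW), viewDir));
        vec3 baseColor = vec3(0.8);
        vec3 edgeColor = vec3(0.0);
        gl_FragColor = vec4(mix(baseColor, edgeColor, edgeFactor * uOutline), 1.0);
    }`;

const toonMat = new BABYLON.ShaderMaterial("toon", scene, "toon", {
    attributes: ["position", "normal"],
    uniforms: ["world", "worldViewProjection", "cameraPosition", "uOutline"],
});
toonMat.setFloat("uOutline", 1); // set to 0 to turn the outline off
```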
Still a bit interested in this also if there is any way
But it feels like we are getting into a bit of hackish territory, which I do not enjoy… It might cause some problem down the road with some rendering stuff
Ok, trying to clarify some stuff!
So are you suggesting this should be two materials or that it can be done in one node material/shader? I think I cannot escape the inverted hull in any case?
I've been trying to dig up some Unity tutorials, and back in that world they add another material/shader to the mesh that does the "another pass" for the inverted hull. I struggle to understand how to do this in Babylon…
I apologize for these long posts; I wish I understood these concepts better
No worries mate! We are here to help!
I see two paths:
- With inverted hull and a specific shader just for the hull (maybe just a solid color): easy and performant. You simply need to clone your mesh, give the clone the solid color material, render it with sideOrientation inverted and boom (also make sure to set the depth function to LESS, not LESSEQUAL); see the sketch after this list
- No inverted hull: add the edge detection directly into the mesh material. Not sure how it will look, but this will be the fastest for sure
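For the first path, something along these lines should be close (untested sketch, names are mine; the 5% scale is a crude way to give the hull some thickness, pushing vertices along their normals in a shader would be nicer):

```ts
// Inverted hull: a clone rendered inside-out with a solid color.
const hull = mesh.clone("outlineHull");

const hullMat = new BABYLON.StandardMaterial("outlineMat", scene);
hullMat.disableLighting = true;
hullMat.emissiveColor = BABYLON.Color3.Black(); // solid outline color
hullMat.sideOrientation = BABYLON.Material.ClockWiseSideOrientation; // inverted winding: only back faces show
hullMat.depthFunction = BABYLON.Constants.LESS; // LESS, not LESSEQUAL, as noted above

hull.material = hullMat;
hull.scaling.scaleInPlace(1.05); // crude outline thickness
```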
Yes, it is easy to reuse the depth/stencil texture in other tasks using a frame graph. However, to use custom post-processing, you will need to write a custom frame graph task. There is no documentation for this yet, but you can find several frame graph tasks in FrameGraph/Tasks as examples. I also plan to write documentation on a non-trivial example, which will create several custom tasks as well as node render graphs (but this may have to wait until I return from vacation).
I tried this solution, but it is not "perfect" either: if the mesh is, for example, submerged a bit into the ground, the edges are buried and cannot be seen. I would need another "pass" to figure out the lines, like this guy did. Feels like this is getting more complex, and maybe not so good on the perf side, hmmm!
Been thinking about this really hard also, and I'm having a hard time figuring out how a single shader would accomplish this. Figuring that out might not be the most time-efficient for me
(doing a big multiplayer game has a bazillion components)
This sounds pretty cool. Would like to see this in action (and also, off topic, it would be cool to see if I can optimize my server render function with this?).
There is also the now official edge detection post process, which does slightly different stuff than what I have in my shader; it only compares colors. Could that post process be used as a test bench for this, to handle the depth issue? I would really prefer the post process/frame graph route on this if it is possible. But I feel that I lack understanding of the deeper parts of the engine on this matter…
