A great outline thread/problem

Hi everyone!

I am currently using a cheap, custom-made outline for my game models, inspired by this Unity tutorial, and you can see it here in the NME.

It looks like this in action. It basically works by inspecting color changes instead of actual normals.

However, I’ve been planning to improve this for a while now. There are a couple of problems:

  1. I want to exclude some meshes from the effect
  2. The edges are a bit rough

As you can probably guess, problem no. 1 is a tough one. I have a separate UI camera right now to ensure that the post-processing effect does not affect any UI elements. But today I faced another problem: I wanted to exclude the eyes of the characters, but if I put them into a separate camera with a different layer mask, they are rendered on top of everything else. Yikes.

This led me into a rabbit hole of trying to find a solution for this:

So, here are the final questions I would very, very much like some solutions for:

  • How to most elegantly filter meshes out of post-processing, while also tackling the render indexing problem. Can the new node render graph help with this? Here is one of my biggest gripes: I had to exclude the phone dials from the main camera so they don't get the post-processing effect, but this causes the render indexing issue:
    phonegui

  • How to improve my current node material post-process; any ideas would be welcome! I would very much like to make it smoother on the edges, for example.

The most controllable way to draw an outline is to copy the model, render only its back faces, and expand it outward along the normal in clip space. On this basis, you can achieve an outline with constant width in screen space. The method used in Genshin Impact is similar to this.
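In Babylon.js terms, a minimal sketch of that inverted-hull idea could look roughly like this (all names here are illustrative, and this is not the node-material setup used elsewhere in the thread): duplicate the mesh, flip its winding so only the shell's back faces render, and push the vertices outward in clip space so the outline keeps a roughly constant screen-space width.

    BABYLON.Effect.ShadersStore["hullOutlineVertexShader"] = `
        precision highp float;
        attribute vec3 position;
        attribute vec3 normal;
        uniform mat4 worldViewProjection;
        uniform float outlineWidth;
        void main() {
            vec4 clipPos = worldViewProjection * vec4(position, 1.0);
            // crude clip-space normal (ignores non-uniform scale; fine for a sketch)
            vec2 clipNormal = normalize((worldViewProjection * vec4(normal, 0.0)).xy);
            // scaling the offset by w keeps the outline width constant in screen space
            clipPos.xy += clipNormal * outlineWidth * clipPos.w;
            gl_Position = clipPos;
        }`;
    BABYLON.Effect.ShadersStore["hullOutlineFragmentShader"] = `
        precision highp float;
        void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }`;

    const hullMaterial = new BABYLON.ShaderMaterial("hullOutline", scene, "hullOutline", {
        attributes: ["position", "normal"],
        uniforms: ["worldViewProjection", "outlineWidth"],
    });
    hullMaterial.setFloat("outlineWidth", 0.02);

    // "mesh" is a hypothetical mesh to outline; clone it so the shell does not touch the original geometry
    const hull = mesh.clone(mesh.name + "_hull");
    hull.makeGeometryUnique();
    hull.flipFaces(false); // reverse the winding so backface culling leaves only the shell
    hull.material = hullMaterial;

The original mesh then renders on top of the inflated shell, leaving only the expanded silhouette visible as the outline.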

Ah yes, I forgot to mention that I am well aware of this tactic, but it is not viable for me. It adds another workflow step in Blender and more performance strain, since the meshes are basically duplicated. Also, since I cannot fully merge all the meshes into one because of dynamic clothing etc., it would become a real hassle that a solo dev might not be able to handle.

Yes, post-processing outlines rely on changes in normals or depth: you calculate gradients from them to guess whether a pixel lies on an edge. If you consider implementing it this way, you can refer to the approach mentioned in this article.

To make the edges smoother, you can consider using blur to reduce the jagged feeling.
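For example, something as simple as chaining two one-directional blur passes after the outline pass already softens the hard edge (a sketch only; the kernel size is arbitrary and "camera" is the camera carrying the outline post-process):

    // horizontal then vertical Gaussian blur, 8-pixel kernel, full-resolution
    const blurH = new BABYLON.BlurPostProcess("blurH", new BABYLON.Vector2(1, 0), 8, 1.0, camera);
    const blurV = new BABYLON.BlurPostProcess("blurV", new BABYLON.Vector2(0, 1), 8, 1.0, camera);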

Article: https://www.vertexfragment.com/ramblings/unity-postprocessing-sobel-outline/
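For reference, the core of that Sobel-on-depth approach translates to Babylon.js fairly directly. This is only a sketch, not the node material discussed in this thread; the shader name, the threshold value and the use of a plain DepthRenderer are assumptions:

    BABYLON.Effect.ShadersStore["sobelOutlineFragmentShader"] = `
        precision highp float;
        varying vec2 vUV;
        uniform sampler2D textureSampler; // scene color provided by the post-process
        uniform sampler2D depthSampler;   // depth map from a DepthRenderer
        uniform vec2 texelSize;
        uniform float threshold;
        float d(vec2 uv) { return texture2D(depthSampler, uv).r; }
        void main() {
            // 3x3 Sobel kernels applied to the depth values around the current pixel
            float tl = d(vUV + texelSize * vec2(-1.0,  1.0));
            float  t = d(vUV + texelSize * vec2( 0.0,  1.0));
            float tr = d(vUV + texelSize * vec2( 1.0,  1.0));
            float  l = d(vUV + texelSize * vec2(-1.0,  0.0));
            float  r = d(vUV + texelSize * vec2( 1.0,  0.0));
            float bl = d(vUV + texelSize * vec2(-1.0, -1.0));
            float  b = d(vUV + texelSize * vec2( 0.0, -1.0));
            float br = d(vUV + texelSize * vec2( 1.0, -1.0));
            float gx = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
            float gy = (tl + 2.0 * t + tr) - (bl + 2.0 * b + br);
            float edge = length(vec2(gx, gy)) > threshold ? 1.0 : 0.0;
            gl_FragColor = mix(texture2D(textureSampler, vUV), vec4(0.0, 0.0, 0.0, 1.0), edge);
        }`;

    const depthRenderer = scene.enableDepthRenderer(camera);
    const sobel = new BABYLON.PostProcess("sobelOutline", "sobelOutline",
        ["texelSize", "threshold"], ["depthSampler"], 1.0, camera);
    sobel.onApply = (effect) => {
        effect.setTexture("depthSampler", depthRenderer.getDepthMap());
        effect.setFloat2("texelSize", 1 / engine.getRenderWidth(), 1 / engine.getRenderHeight());
        effect.setFloat("threshold", 0.02); // tune per scene; depth gradients are scale dependent
    };

Running the same kernels over a normal texture as well and combining both edge signals catches interior edges that depth alone misses.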

1 Like

Thank you for the article! I ran into the Sobel operator in my investigation as well, and this explains it nicely! :star_struck:

Now, I think the biggest issue is the exclusion of meshes… I would really like to find an elegant solution for Babylon that works with WebGL/WebGPU on this one…

The article gives input on this problem as well:

Excluding from the Outline
Occasionally you may need to exclude geometry from the outline effect. In Realms, we exclude both water and grass from the outline but approach it in different ways.
For forward effects, we make use of a custom OutlineOcclusionCamera which can be configured to render certain geometry. This camera writes the depth values of the selected geometries to a _OcclusionDepthMap which is provided to the Sobel shader. The implementation of this camera is demonstrated in the GitHub repository.
For deferred effects, such as the grass shader, we set a signal flag in the form of setting the .w of the normal vector to 0.0, which we then interpret in the Sobel shader as meaning “ignore this fragment.” It is crude, but gets the job done.

I think this kind of functionality would be doable with Babylon as well, but it seems like a bit of a hassle to me!

1 Like

You just need to render the objects that don’t need outlining to a render target and pass it into your outline pass. Use the UV calculated from the screen coordinates to sample the texture and do a Sobel operation. You can compute a threshold value and use it to determine whether to return the outline color or the scene texture. :grinning_face:
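A rough sketch of that setup (the names are made up, and the extra sampler would still have to be declared on the outline post-process): render the excluded meshes into their own render target and let the outline shader pass the scene color through wherever that target is covered.

    const noOutlineRTT = new BABYLON.RenderTargetTexture("noOutlineRTT", { ratio: 1 }, scene);
    noOutlineRTT.renderList = meshesToExclude;                 // e.g. the eyes and the phone dials
    noOutlineRTT.clearColor = new BABYLON.Color4(0, 0, 0, 0);  // transparent where nothing is drawn
    scene.customRenderTargets.push(noOutlineRTT);

    outlinePostProcess.onApply = (effect) => {
        // a full-screen post-process already samples with screen-space UVs (vUV),
        // so the shader can read this mask with the same coordinates as the scene color
        effect.setTexture("noOutlineSampler", noOutlineRTT);
    };

In the shader, a fragment whose mask alpha is above zero simply returns the scene color; everything else goes through the Sobel test as before.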

2 Likes

You could have used the stencil texture, if WebGL supported reading from it… I think the solution from @KallkaGo should work, though.

2 Likes

Argh, I am so close now, but for some reason the texture is not propagating to the post-process properly?

Does anyone have any idea why?
shader shenanigans | Babylon.js Playground

I keep banging my head on this:


It seems that the texture is just the clear color for that second mesh, which should not share the post-processing effect and should just pass the texture through… In the NME I do have initial 1px × 1px transparent PNGs set, so WebGPU does not whine about those… Anyone got a clue? :frowning: Am I passing the texture to the post-process wrong somehow?

You must not set the textures on the effect, but on the node blocks themselves:

        getTextureBlock(ppMaterial, "depth").texture = depthMap;
        getTextureBlock(ppMaterial, "noPPDepth").texture = noPPDepthMap;
        getTextureBlock(ppMaterial, "noPP").texture = renderTarget;

and not:

        effect.setTexture("noPPTexture", renderTarget);
        effect.setTexture("depthTexture", depthMap);
        effect.setTexture("noPPDepthTexture", noPPDepthMap);

I’m not sure it fixes all your problems, but at least the textures are passed correctly, as you can see in the NME:

Before:


After:

Also, I think the uv input of the noPP block should be connected to UV scale, not screen.position?

The PG with these changes (I also added an observer on engine.onResizeObservable to resize the textures when we open the inspector):

2 Likes

Thank you for the fixes and looking into it!

However, that does not seem to fix the ultimate issue of passing the actual EdgeRTT texture to the post-process. It is correctly displayed here:

And fun fact: if you “load texture from file”, it seems to work perfectly with any file, for example with this John Cena meme picture:

Ok, the problem is that the noPP texture is generated at the end of the frame (because it’s the output of the 2nd camera), whereas it’s used earlier in the frame (in the post-process applied after the first camera has been processed).

I’ve changed the PG to get around it:

  • the noPP texture is generated earlier, when scene.customRenderTargets are processed (so, even before the cameras are processed).
  • meshes rendered into noPP are collected in meshesForRTT
  • the noPP camera is now only used to generate the depth map of noPP => the list of meshes rendered into the depth map is also set to meshesForRTT
  • there’s no need for layerMask anymore

PG: https://playground.babylonjs.com/#R1FV7I#17
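Condensed, the relevant part of that setup looks roughly like this (variable names assumed from the PG; the noPP camera itself already exists in the scene):

    // list of meshes that should bypass the outline post-process
    const meshesForRTT = [eyeLeft, eyeRight, phoneDial]; // hypothetical meshes

    // the noPP color texture is a customRenderTarget, so Babylon renders it at the
    // start of the frame, before any camera (and before the outline pass samples it)
    const noPPTexture = new BABYLON.RenderTargetTexture("noPP", { ratio: 1 }, scene);
    noPPTexture.renderList = meshesForRTT;
    scene.customRenderTargets.push(noPPTexture);

    // the second camera is now only there to produce the matching depth map,
    // restricted to the same mesh list; no layerMask juggling is needed anymore
    const noPPDepthMap = scene.enableDepthRenderer(noPPCamera).getDepthMap();
    noPPDepthMap.renderList = meshesForRTT;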

1 Like