Hello, I encountered a limitation and wondered if it is avoidable.
Situation: I have a MultiRenderTarget into which I render my scenery, and a later post process over the whole scene samples its depth texture:
const mainPostProcess = new FxaaPostProcess("fxaa", 1, camera, samples);
this.mainTarget.addPostProcess(mainPostProcess);
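For context, here is a minimal sketch of the surrounding setup, assuming an MRT created with a depth texture; the shader, names and sizes are placeholders, not my actual code:

BABYLON.Effect.ShadersStore["showDepthFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler;
    uniform sampler2D depthSampler;
    void main(void) {
        // visualize the MRT depth as grayscale
        gl_FragColor = vec4(vec3(texture2D(depthSampler, vUV).r), 1.0);
    }`;

const mainTarget = new BABYLON.MultiRenderTarget("main",
    { width: engine.getRenderWidth(), height: engine.getRenderHeight() },
    2, scene, { generateDepthTexture: true });
mainTarget.renderList = scene.meshes.slice();
scene.customRenderTargets.push(mainTarget);

// a later post process over the whole scene, sampling the MRT depth
const showDepth = new BABYLON.PostProcess("showDepth", "showDepth",
    [], ["depthSampler"], 1.0, camera);
showDepth.onApply = (effect) => {
    effect.setTexture("depthSampler", mainTarget.depthTexture);
};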
Once the post process is added, the resulting depth texture is fully white (tested with a BlackAndWhite post process too; they all blank it).
Is this a technical limitation? Can I avoid the depth clearing? Can I hack around it?
I’m working on a playground version on my side.
Enabling the depth renderer happens at the scene level, and I’m rendering into a MultiRenderTarget specifically to get the depth out of it.
The last time I tried getDepthMap, the depth was incomplete, and that answer:
convinced me it was a dead end.
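For reference, the attempt looked roughly like this (a hedged reconstruction, not the exact code):

const depthRenderer = scene.enableDepthRenderer(camera);
const depthMap = depthRenderer.getDepthMap(); // RenderTargetTexture holding the scene depth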
Meanwhile I noticed that it was still rendering to the main scene buffer, and I don’t know how to make it render only to the target texture.
I’m pasting some close-up renders of what I have:
Since the effect is a mesh that activates an effect in another target texture, it needs to be cut out using depth: I read back the depth from the scenery render texture, compare it to the zone-limitation texture, and exclude fragments by comparing the two depths. (I also use the depth difference as the distance from the effect mesh to the mesh below, to adjust the level of the effect; you can sort of see it as the gradient at the boundary in the FXAA-less screenshot.)
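To make that concrete, here is a hedged sketch of the cut-out logic; the shader name, uniforms and constants are placeholders, not my actual code:

BABYLON.Effect.ShadersStore["zoneCutoutFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler; // composited scene color
    uniform sampler2D sceneryDepth;   // depth from the scenery MRT
    uniform sampler2D zoneDepth;      // depth of the effect/zone mesh
    void main(void) {
        vec4 color   = texture2D(textureSampler, vUV);
        float dScene = texture2D(sceneryDepth, vUV).r;
        float dZone  = texture2D(zoneDepth, vUV).r;
        // keep the effect only where the zone mesh sits in front of the scenery...
        float mask = step(dZone, dScene);
        // ...and scale it by the zone-to-scenery distance (the visible gradient)
        float level = clamp((dScene - dZone) * 20.0, 0.0, 1.0);
        gl_FragColor = mix(color, vec4(1.0, 0.3, 0.0, 1.0), mask * level);
    }`;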
Meanwhile, I’m a bit blocked with the playground, since I don’t see my pipeline shader rendering.
I quickly reproduced part of what I have on my end, stripping it down to make it clearer.
Since I’m blocked at that stage, I haven’t advanced to the point where my playground shows the real issue I want to talk about.
If you have better ideas on how I can solve that, they’re welcome too.
I assume MultiRenderTarget allows adding channels to the rendering, which could help quite a bit on that matter, but I couldn’t find clear documentation on it.
You should not pass the camera to the FXAA post process constructor, else the post process will be applied each time the camera is used: when rendering the multi render target and when rendering the regular scene. I think you only want the post process to apply to the multi render target, so you should pass undefined for this parameter and pass the engine instead.
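For example (a sketch; engine and mainTarget are assumed to be in scope):

// pass null for the camera so the FXAA is not attached to the camera’s
// post-process chain, and provide the engine explicitly instead
const mainPostProcess = new BABYLON.FxaaPostProcess(
    "fxaa",
    1.0,                                    // options / ratio
    null,                                   // no camera
    BABYLON.Texture.BILINEAR_SAMPLINGMODE,  // sampling mode
    engine                                  // required when no camera is given
);
mainTarget.addPostProcess(mainPostProcess);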
By default, a new RTT (R) is created as the input for the first post process, and the regular scene is rendered into R. Then only the post process is applied to your MRT (with R as the input), so there’s no depth data generated. In your case, you want this first input to be your MRT, so you should set the inputTexture property of the FXAA post process.
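Something along these lines (a sketch; recent Babylon.js versions expect a RenderTargetWrapper here, older ones an InternalTexture):

// make the MRT the input of the first post process so the FXAA reads it
mainPostProcess.inputTexture = mainTarget.renderTarget;            // recent versions
// mainPostProcess.inputTexture = mainTarget.getInternalTexture(); // older versions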
This part is a bit confusing: are you intentionally drawing a distinction between an RTT and the MRT, given that the only render target texture here is mainTarget? Or do you mean the main render target used when there is no target texture? (I don’t know what that’s officially called, by the way.)
The way I understand my code, I only render into my MRT, then apply the FXAA, then post process the scene, bringing the MRT’s post-processed result back to the classic target (or whatever it’s called).
So, it seems you want the FXAA to be applied as the very last step. You can do it like this:
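(A hedged sketch of that idea, relying on the fact that camera-attached post processes run in creation order; names are placeholders:)

// the compositing pass (e.g. the zoneMaterialPass, with its shader defined elsewhere)
// is attached to the camera first...
const zonePass = new BABYLON.PostProcess("zoneMaterialPass", "zoneMaterialPass",
    [], ["sceneryDepth", "zoneDepth"], 1.0, camera);
// ...then the FXAA, created afterwards on the same camera, runs as the very last step
const fxaa = new BABYLON.FxaaPostProcess("fxaa", 1.0, camera);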
No, I was saying MRT because in your sample you are using a MRT. The important thing in what I said is that by default a specific render texture is created as the input of the first post process and the scene is rendered into this texture instead of being rendered into the default framebuffer.
That’s what happens, except that there is always a regular rendering of the scene, either into the default framebuffer or into a separate RTT if you use post processes. You can’t get rid of this rendering for the time being, even if you don’t use it (which is what happens in your PG, as you overwrite the final rendering with your zoneMaterialPass post process).
In fact, one way to avoid this rendering would be to render your MRT with a different camera and flag all the meshes to only be rendered by this camera and not by the default camera of the scene (using the layerMask property of the camera / meshes).
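A sketch of that approach (camera name, mask value and mesh list are placeholders):

const mrtCamera = new BABYLON.FreeCamera("mrtCam", camera.position.clone(), scene);
mrtCamera.layerMask = 0x10000000;   // only sees meshes carrying this bit
camera.layerMask = 0x0FFFFFFF;      // default camera ignores that bit

for (const mesh of sceneryMeshes) { // the meshes that should only go to the MRT
    mesh.layerMask = 0x10000000;
}

mainTarget.activeCamera = mrtCamera; // render the MRT with the dedicated camera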
You can see that the cubes making up that side produce some glitches. I’m fully aware this might come from rounding errors, but the FXAA on the MRT did the trick of fixing them, and it no longer does when it sits downstream.
I’m wondering if it’s solvable.
I’m also wondering if rendering everything through the MRT might have drawbacks, such as performance drops or other issues.
No, I did nothing about that in the PG: the meshes are still rendered into the MRT and by the regular rendering. This PG won’t render the meshes during the regular rendering:
It’s hard to tell where those artifacts come from (some z-fighting?); a repro somewhere would help. I don’t think it’s coming from FXAA (what’s the output if you disable FXAA?).
No, I actually get them without the FXAA, and the FXAA fixes them when it runs on the regular rendering or on the MRT as the MRT’s own post process, but it doesn’t fix them when it’s bundled with the other post process like:
After some tests, it’s obvious those glitches come from the MRT; the default framebuffer renders fine.
Unfortunately, creating a PG for it would be rather complicated and time-consuming.
Unfortunately, that one only gives about two meshes’ worth of depth and the rest remains blank, as if it were captured mid-render, which brings me back to the issue you created, “Reusing the depth buffer from the scene rendering”. But the fact that I get at least something makes me wonder if it’s some sort of timing issue.
There it is: I clearly have a rendering-quality difference between the classic buffer and the MRT. This one is the MRT.
This one is without FXAA.
When I said earlier that I had better results with the FXAA, it’s because I had mistakenly fed the main buffer texture into the pipe instead of the MRT one.
Is that difference between the classic buffer and the MRT actually a known one?
By default, antialiasing is enabled for the default framebuffer. I guess you should set the samples property of the MRT to achieve the same result (you can set it to engine.getCaps().maxSamples, e.g.).
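For instance (a sketch; the engine clamps the value to what the hardware supports):

mainTarget.samples = engine.getCaps().maxSamples; // MSAA on the MRT, like the default framebuffer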
Adding the second line breaks the render and throws warnings:
[.WebGL-00001E140020DA00] GL_INVALID_FRAMEBUFFER_OPERATION: Framebuffer is incomplete: Attachments have different sample counts.
[.WebGL-00001E140020DA00] GL_INVALID_FRAMEBUFFER_OPERATION: Draw framebuffer is incomplete
You should not call createDepthStencilTexture yourself; for an MRT, the texture will be automatically created/updated by the mainTarget.samples = 2 call.
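In other words (a before/after sketch of the fix):

// mainTarget.createDepthStencilTexture(...); // remove: its sample count ends up
//                                            // differing from the color attachments
mainTarget.samples = 2; // this call (re)creates a matching depth/stencil texture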
Maybe it does not work because the depth buffer can’t be resolved to a normal texture… In MSAA mode, there’s a special operation that converts a multi-sampled color texture into a normal texture before you can use it as a regular sampled texture in a shader, but for a depth texture I’m not sure this step exists (looking at the sources, it seems it does not) or works in WebGL.