Applying a PostProcessRenderPipeline only to a RenderTargetTexture

I have a rather custom setup where a RenderTargetTexture is used to do the bulk of the rendering, and then it later gets blitted to the canvas. (I manually call renderTarget.render() every frame.)

I want to run the DefaultRenderingPipeline on this RenderTargetTexture in order to leverage the built-in post-processing effects. This is somewhat doable: create a pipeline, then, when rendering the render target, pass true for useCameraPostProcess: renderTarget.render(true /* useCameraPostProcess */)
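For context, a minimal sketch of that setup. The real calls are `new BABYLON.DefaultRenderingPipeline(...)` and `renderTarget.render(true)`; since there's no WebGL context here, the demonstration at the bottom passes in stand-in stub objects instead of the real BABYLON namespace, and the names `setupPipeline`/`renderFrame` are just illustrative wrappers:

```javascript
// Hedged sketch of the setup described above. BABYLON, scene, camera, and
// renderTarget come from the real app; they are passed in as parameters so
// the functions themselves have no globals.
function setupPipeline(BABYLON, scene, camera) {
  // DefaultRenderingPipeline bundles the built-in post processes
  // (tonemapping, bloom, FXAA, ...) for the given cameras.
  return new BABYLON.DefaultRenderingPipeline("default", /* hdr */ true, scene, [camera]);
}

function renderFrame(renderTarget) {
  // true = useCameraPostProcess: run the camera's post-process chain
  // as part of rendering this RTT.
  renderTarget.render(true);
}

// Standalone check with stand-in stubs (no WebGL context needed):
const calls = [];
const stubBabylon = {
  DefaultRenderingPipeline: class {
    constructor(name) { calls.push(["pipeline", name]); }
  },
};
setupPipeline(stubBabylon, /* scene */ {}, /* camera */ {});
renderFrame({ render: (usePP) => calls.push(["render", usePP]) });
// calls now records: pipeline created with name "default", then render(true)
```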

In Spector.js I can see that the pipeline ends up running against the render target texture, which is good.

However, the final output isn’t getting blitted back to the canvas, though there’s probably a way I could solve that.

The bigger issue is that the pipeline gets run again by the scene, so the post-processing ends up running twice… once for my custom RenderTargetTexture, and again on the scene.

Is there a way I can ensure the pipeline is only applied to the RenderTargetTexture, and prevent the scene from running the post processes again?

I found a way: Wrap the manual call to render like this:

scene.postProcessesEnabled = true;
renderTarget.render(true);
scene.postProcessesEnabled = false;
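A small sketch of how that toggle could be wrapped so it lives in one place. `scene` and `renderTarget` are assumed to be a Babylon.js Scene and RenderTargetTexture (note the Scene flag is `postProcessesEnabled`, plural); the demonstration below uses plain stand-in objects since there is no engine here:

```javascript
// Hedged sketch: enable scene post-processing only for the manual RTT
// render, then restore the flag so the scene's own render pass skips
// the pipeline.
function renderTargetWithPostProcess(scene, renderTarget) {
  scene.postProcessesEnabled = true;
  try {
    renderTarget.render(true /* useCameraPostProcess */);
  } finally {
    // Restore even if render throws, so the scene never double-runs
    // the post processes.
    scene.postProcessesEnabled = false;
  }
}

// Standalone check with stand-in objects:
const sceneStub = { postProcessesEnabled: false };
const seen = [];
const rttStub = {
  render: (usePP) => seen.push({ usePP, enabled: sceneStub.postProcessesEnabled }),
};
renderTargetWithPostProcess(sceneStub, rttStub);
console.log(seen[0].usePP, seen[0].enabled, sceneStub.postProcessesEnabled); // true true false
```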

Now the question is how I can redirect the renderTarget’s post-processing output to the canvas.

Hello! How is the RenderTargetTexture connected to the scene? Are you using it as a mesh texture? I tried what you mentioned in your second post, and the RTT does show up with post-processing: Test render post process to RTT | Babylon.js Playground (babylonjs.com)


The RTT is the entire screen in my case. Rather than using the canvas directly, I fill this (16-bit floating-point HDR) RTT with a bunch of custom rendering. I’ll then tonemap this back into the actual canvas and render some more stuff after that onto the canvas directly (like Babylon UI).

Now though, rather than doing my own tonemapping, I’m using the pipeline’s tonemapping (along with all the other cool post processes it enables).

So, like your example, I also see the post process getting applied correctly to my RTT, although under the hood (in Spector.js) it looks like the final post output is actually being put into a new fullscreen RTT. That’s fine functionally: I can do a simple fullscreen blit of the final post contents from that RTT back into the canvas.

However, it’s not ideal from a perf perspective… It’s one additional fullscreen RTT (memory overhead) as well as an unnecessary fullscreen blit (runtime overhead).

A better solution would be to have the pipeline (or the last post process in the pipeline) know to output directly to the canvas’s framebuffer, rather than to the RTT it’s using now. This would avoid the additional RTT as well as the need for my manual fullscreen pass to copy the post-processed contents back into the canvas.

Hmmm that’s an interesting suggestion, let me add @sebavan to this convo too (he’s OOF this week so he won’t see immediately) :thinking:

@BrianK, let’s chat offline about this one and come back to the thread with the best solution :slight_smile:

I just wanted to close this out. The solution involved setting delayAllocation=true when constructing the RenderTargetTexture.
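A sketch of what that construction might look like. In recent Babylon.js versions the RenderTargetTexture constructor accepts an options object as its fourth parameter, which includes a `delayAllocation` field (as I understand it, this skips allocating the RTT's own internal texture up front, which is what avoids the extra fullscreen RTT). The name, size, and texture type below are illustrative, not from the thread; the demonstration uses a stub class so it runs without a WebGL context:

```javascript
// Hedged sketch of the fix: construct the RTT with delayAllocation: true.
// In real code, BABYLON is the real Babylon.js namespace and scene a
// real Scene; "hdrTarget" and the size are made-up example values.
function createRenderTarget(BABYLON, scene, width, height) {
  return new BABYLON.RenderTargetTexture(
    "hdrTarget",                // illustrative name
    { width, height },
    scene,
    {
      // 16-bit float, matching the HDR RTT described earlier in the thread.
      type: BABYLON.Constants.TEXTURETYPE_HALF_FLOAT,
      delayAllocation: true,    // the key setting from the solution
    }
  );
}

// Standalone check with a stub class recording the constructor options:
const recorded = [];
const stubBabylon = {
  Constants: { TEXTURETYPE_HALF_FLOAT: 2 },
  RenderTargetTexture: class {
    constructor(name, size, scene, options) {
      recorded.push(options.delayAllocation);
    }
  },
};
createRenderTarget(stubBabylon, /* scene */ {}, 1920, 1080);
console.log(recorded); // [ true ]
```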