What is the correct way to render pipeline postprocesses to their own render targets?

Ok, so unless I am mistaken, a PostProcessRenderPipeline chains PostProcesses so that each one's output is rendered to the next post process's render target, instead of its "own" render target. What is the correct way to avoid this behavior?

Let's say my use case is to chain a couple of post process passes like this (see the code sketch after the list):

  1. PostProcess FXAA on 100% resolution FLOAT render target
  2. PostProcess blur on 50% resolution FLOAT render target
  3. PostProcess blur on 25% resolution FLOAT render target
  4. PostProcess blur on 12.5% resolution FLOAT render target, heaviest blur
  5. Finally combine and tonemap to screen at 100% resolution UNSIGNED_BYTE render target
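
For reference, here is a minimal sketch of how I would naively construct that chain in Babylon.js. This is a sketch only: the kernel sizes and blur directions are placeholders, a real blur needs separate X and Y passes at each resolution, and camera/engine come from an existing scene:

    // Sketch: 'camera' and 'engine' come from an existing Babylon.js scene;
    // kernel/direction values are placeholders, and a real blur would need
    // both an X pass and a Y pass at each resolution.
    const fxaa = new BABYLON.FxaaPostProcess('fxaa',
        1.0, camera, BABYLON.Texture.BILINEAR_SAMPLINGMODE, engine, false,
        BABYLON.Constants.TEXTURETYPE_FLOAT);

    const blur50 = new BABYLON.BlurPostProcess('blur50',
        new BABYLON.Vector2(1, 0), 15, 0.5, camera,
        BABYLON.Texture.BILINEAR_SAMPLINGMODE, engine, false,
        BABYLON.Constants.TEXTURETYPE_FLOAT);

    const blur25 = new BABYLON.BlurPostProcess('blur25',
        new BABYLON.Vector2(1, 0), 31, 0.25, camera,
        BABYLON.Texture.BILINEAR_SAMPLINGMODE, engine, false,
        BABYLON.Constants.TEXTURETYPE_FLOAT);

    // ...the 12.5% blur and the final 100% UNSIGNED_BYTE combine/tonemap
    // pass would follow the same pattern.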

Now… Unless I have misunderstood something, what actually happens here is:

  1. The scene is initially rendered to a completely new render target
  2. FXAA is rendered to the 50% target ← even though the intention is to preserve full resolution
  3. The first and second blurs are rendered to the 25% and 12.5% targets ← again, not matching the intended sizes
  4. The final blur actually renders to the 100% render target ← so with 8 × 8 = 64 times more fragment shader invocations than intended. Also, the render target is UNSIGNED_BYTE format, so color information for values over 1.0 is lost
  5. Finally, the combine pass renders to screen as expected.

So my question is: what is the "correct" way to build my render pipeline if I want to specify the size AND type of the render target for each post process step, i.e. to use the values that I initialize the PostProcess() classes with?

Here’s what happens when you have 3 post processes P1, P2, P3, with their render targets being RT1, RT2, RT3:

  • the scene is rendered into the render target of P1 (RT1)
  • the shader of P1 is applied with RT1 as the input texture and RT2 as the output texture
  • the shader of P2 is applied with RT2 as the input texture and RT3 as the output texture
  • the shader of P3 is applied with RT3 as the input texture and the default frame buffer as the output texture

Basically, a post process renders into the render target of the next post process (except for the last one which renders to the default frame buffer, as there is no next post process in this case).

So, in your example, FXAA will be run with the scene as the input texture and the render target of the next post process as the output texture (the one from the 50% blur). If you want the FXAA post process to render into a full-sized render target, you should set the width/height of the blur post process to full size. The 50% reduction should be done on the 3rd post process.
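
In code, the fix amounts to shifting each size one step forward, since every post process actually renders into the render target of the next one. A minimal sketch, with placeholder kernel/direction values and camera assumed from an existing scene:

    // Each pass's *own* render target receives the *previous* pass's output,
    // so the size arguments are shifted forward by one step.
    const fxaa  = new BABYLON.FxaaPostProcess('fxaa', 1.0, camera);  // scene renders here at 100%
    const blur1 = new BABYLON.BlurPostProcess('blur1',
        new BABYLON.Vector2(1, 0), 15, 1.0, camera);                 // FXAA output lands here -> keep 100%
    const blur2 = new BABYLON.BlurPostProcess('blur2',
        new BABYLON.Vector2(1, 0), 15, 0.5, camera);                 // first blur's output -> 50%
    // ...and so on: each constructor receives the size intended for the pass before it.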


Thanks, I guess it has to be done like that.

I now added getPreviousSize() and getPreviousType() helper functions to make it a bit easier to pass the "previous" PostProcess size and type to the next one, while keeping the values at the PostProcess constructors themselves (a sketch of the helpers follows the snippets below):

const fxaa = new FxaaPostProcess('fxaa', getPreviousSize(1.0), … getPreviousType(Constants.TEXTURETYPE_HALF_FLOAT));

const blurX = new BlurPostProcess('hBlur1', getPreviousSize(blurSize), … getPreviousType(Constants.TEXTURETYPE_HALF_FLOAT));

Etc.
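
For completeness, a sketch of how these helpers might be implemented (the names match the snippets above, but the bodies are illustrative): each call records the value intended for the current pass and returns the value recorded by the previous call, matching the shifted-by-one chaining:

    // Illustrative only: call-order-based tracking of the previous pass's
    // intended render target size and texture type.
    let prevSize = 1.0;                                       // the scene renders into the first RT at full size
    let prevType = BABYLON.Constants.TEXTURETYPE_HALF_FLOAT;  // and, here, as half float

    function getPreviousSize(sizeForThisPass) {
      const size = prevSize;
      prevSize = sizeForThisPass;
      return size;
    }

    function getPreviousType(typeForThisPass) {
      const type = prevType;
      prevType = typeForThisPass;
      return type;
    }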

That seems to be working. It is a bit convoluted and would break if the PostProcesses weren't pushed to the pipeline's this._effects in the same order they are initialized, but at least it seems to work now. :)
