Custom postprocessing - bloom

I’m currently implementing a custom post-processing pipeline, but I’ve run into some issues. I want to achieve a bloom effect using either dual blur or mipmap blur. Both blur techniques involve downsampling and upsampling, which means I need to create an RTT for each target level and manually manage its rendering logic (with 8 levels, that’s 15 RTTs). A similar implementation is mipmapBlur in the pmndrs postprocessing library. I’ve looked for information but haven’t found a comparable implementation. How can I implement upsampling and downsampling by extending PostProcessRenderEffect?
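For context, the RTT bookkeeping can be sketched like this (illustrative TypeScript, not Babylon or pmndrs API; the function name and return shape are made up): with N downsample levels you need N downsample targets plus N-1 upsample targets, i.e. 2N-1 RTTs.

```typescript
// Illustrative sketch (not a real engine API): the render-target sizes a
// dual blur of `levels` levels needs, halving on the way down and
// mirroring the chain on the way up.
function dualBlurTargets(levels: number, width: number, height: number) {
  const down: { width: number; height: number }[] = [];
  let w = width;
  let h = height;
  for (let i = 0; i < levels; i++) {
    w = Math.max(Math.floor(w / 2), 1);
    h = Math.max(Math.floor(h / 2), 1);
    down.push({ width: w, height: h });
  }
  // Upsample targets mirror the downsample chain, excluding the smallest level.
  const up = down.slice(0, levels - 1).reverse();
  return { down, up, total: down.length + up.length };
}

// With 8 levels: 8 downsample targets + 7 upsample targets = 15 RTTs.
```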


cc @Evgeni_Popov

I think you can take our Bloom implementation as a good starting point. It already does two blur passes, so it’s a matter of adding additional passes.

Thank you for your reply. I have looked at the existing BloomEffect implementation and, as you mentioned, I intend to use it as a starting point. The problem I’m currently encountering is implementing the blur pass; I don’t know how to take over its rendering logic. Some time ago I implemented a dual-blur pass in Three.js. I need to implement upsampling and downsampling in the pass’s render function and perform the blur with a specific convolution kernel, as shown in the code below. I need multiple render targets for downsampling and upsampling, and the final result must be available to the shader of the next pass.
Additionally, I considered implementing a class that inherits from PostProcess and creating a render target with new RenderTargetTexture inside that class. However, after reading the PostProcess source code, I discovered that the render target created by PostProcess is not built from the RenderTargetTexture class, which confuses me. Is there any way to manually control the rendering logic within PostProcess?

Complete code: dualBlurPass

render(renderer: WebGLRenderer, inputBuffer: WebGLRenderTarget) {
  const count = this.loopCount
  let width = inputBuffer.width
  let height = inputBuffer.height

  // down sample: halve the resolution at each level
  for (let i = 0; i < count; i++) {
    downRt[i].setSize(width, height)
    upRt[i].setSize(width, height)
    this.downSampleMaterial.uniforms.uSize.value.set(1 / width, 1 / height)

    width = Math.max(width / 2, 1)
    height = Math.max(height / 2, 1)
    this.downSampleMaterial.uniforms.uFirst.value = false
    if (i === 0) {
      this.finRT.texture = inputBuffer.texture
      this.additive && (this.downSampleMaterial.uniforms.uFirst.value = true)
    }
    this.downSamplePass.render(renderer, this.finRT, downRt[i])
    this.finRT.texture = downRt[i].texture
  }

  // up sample: walk back up the chain, blending in the matching downsample level
  upRt[count - 1].texture = downRt[count - 1].texture
  for (let i = count - 2; i >= 0; i--) {
    this.upSampleMaterial.uniforms.uSize.value.set(1 / upRt[i].width, 1 / upRt[i].height)
    this.additive && (this.upSampleMaterial.uniforms.uCurDownSample.value = downRt[i].texture)
    this.upSamplePass.render(renderer, this.finRT, upRt[i])
    this.finRT.texture = upRt[i].texture
  }
}

I think you can do something like this (it’s an example; in your case you would probably want to package it as a PostProcessRenderEffect, like BloomEffect):

https://playground.babylonjs.com/?inspectorv2=true#900H0F#2

You would have to create a custom upsampling post-process, and add it (them) to the postProcesses list.

Once you’ve created a PostProcessRenderEffect, you don’t have to create a rendering pipeline; you can use it directly like this:

https://playground.babylonjs.com/?inspectorv2=true#XKC8TV#1

Would that work for you?


Since I plan to add this custom bloom effect to my custom post-processing pipeline: if I use directRender to render the upsampled and downsampled results into the RTT, can the order of effects in the post-processing chain be guaranteed? Assuming the first effect in my chain is vignette and the second is bloom, can I guarantee that the vignette effect has already been applied when directRender is called?

I used directRender for demonstration purpose, but if you write a PostProcessRenderEffect, you won’t need it.

Here’s a basic implementation of such PostProcessRenderEffect, to get you started:

https://playground.babylonjs.com/?inspectorv2=true#WDDM1I#2

The ThinCustomBloomEffect class makes it easier to add support for this effect to frame graphs, if you ever want to implement it.


I tried writing a demo based on the dual blur and Call of Duty (COD) glow techniques, and the result was quite good. However, I ran into a problem: when I inspect the draw calls with Spector.js, the viewport size is not what I expect. I have checked the code and the logic looks correct. Am I overlooking something?

As shown in the image below, the viewport of the first downsample has been halved, but the size ratio I pass for the first downsample is 1, which should be the full canvas rendering size.

    const samplerSize: number[] = [];
    let size = 1;
    for (let i = 0; i < this.count; i++) {
      samplerSize[i] = size;
      size /= 2;
    }

    for (let i = 0; i < this.count; i++) {
      const downSamplePass = new BABYLON.PostProcess(
        `downsample_${i}`,
        'downsample',
        ['uFirst', 'uLuminanceThreshold', 'uBlurRange', 'uResolution'],
        ['textureSampler'],
        samplerSize[i],
        null,
        BABYLON.Texture.BILINEAR_SAMPLINGMODE,
        engine,
        undefined,
        undefined,
        BABYLON.Constants.TEXTURETYPE_HALF_FLOAT
      );

      downSamplePass.onApply = (effect) => {
        console.log('down', i, engine.getRenderWidth(), engine.getRenderHeight())
        effect.setBool('uFirst', i === 0);
        effect.setFloat('uLuminanceThreshold', this._threshold);
        effect.setFloat('uBlurRange', this._range);
        effect.setVector2('uResolution', new BABYLON.Vector2(1 / engine.getRenderWidth(), 1 / engine.getRenderHeight()));
      };

      this._effects.push(downSamplePass);
      this._downSamplePP.push(downSamplePass);
    }

    for (let i = this._count - 2; i >= 0; i--) {
      const upSamplePass = new BABYLON.PostProcess(
        `upsample_${i}`,
        'upsample',
        ['uBlurRange', 'uResolution'],
        ['textureSampler', 'uCurDownSample'],
        samplerSize[i],
        null,
        BABYLON.Texture.BILINEAR_SAMPLINGMODE,
        engine,
        undefined,
        undefined,
        BABYLON.Constants.TEXTURETYPE_HALF_FLOAT
      );

      upSamplePass.onApply = (effect) => {
        console.log('up', i, engine.getRenderWidth(), engine.getRenderHeight())
        effect.setTextureFromPostProcess('uCurDownSample', this._downSamplePP[i]);
        effect.setFloat('uBlurRange', this._range);
        effect.setVector2('uResolution', new BABYLON.Vector2(1 / engine.getRenderWidth(), 1 / engine.getRenderHeight()));
      };

      this._upSamplePP.push(upSamplePass);
      this._effects.push(upSamplePass);
    }

I added console logging; the output is shown below.

PG: https://playground.babylonjs.com/#WDDM1I#11

That’s because of the way a chain of post-processes works in Babylon: the texture of post-process #i is used as the render target of post-process #i-1, and as textureSampler by the shader of post-process #i.

Special cases are the first (#0) and last (#n-1) post-processes:

  • First post-process: its texture is used to render the scene. So, in your PG, you can remove _sceneCopyPass and use this._lumiancePass instead of this._sceneCopyPass in effect.setTextureFromPostProcess('baseSampler', this._sceneCopyPass);
  • Last post-process: it renders to the default framebuffer.
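This chaining rule can be modeled with a toy sketch (plain TypeScript, not Babylon code; the function and names are made up): each pass samples its own texture and renders into the texture of the next pass, with the last pass going to the default framebuffer.

```typescript
// Toy model of Babylon's post-process chaining (not actual engine code):
// pass #i samples its own texture as textureSampler and renders into the
// texture of pass #i+1; the last pass renders to the default framebuffer.
function chainRenderTargets(
  passes: string[]
): { pass: string; input: string; target: string }[] {
  return passes.map((name, i) => ({
    pass: name,
    input: name, // the shader's textureSampler is the pass's own texture
    target: i < passes.length - 1 ? passes[i + 1] : "default framebuffer",
  }));
}
```

For example, with a chain `['luminance', 'down0', 'down1']`, the luminance pass renders into down0’s texture, down0 renders into down1’s texture, and down1 renders to the default framebuffer.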

In your case:

  • the texture of _lumiancePass is used to render the scene. As size=1, this is a full sized texture
  • the texture of _downSamplePP[0] is the texture _lumiancePass renders into. As samplerSize[0]=1, this is a full sized texture
  • the texture of _downSamplePP[1] is the texture _downSamplePP[0] renders into. As samplerSize[1]=0.5, this is a half sized texture

That’s why the first downsample post-process renders to a 0.5 sized texture. If you want it to be full sized, you should do:

for (let i = 0; i < this.count; i++) {
  samplerSize[i] = size;
  if (i > 0) {
    size /= 2;
  }
}


Thank you for your answer. I found this logic in the _finalizeFrame function of the PostProcessManager class in the source code: the render target of the current post-process is the RTT of the next pass.

I was unfamiliar with Babylon’s post-processing pipeline: unlike Three.js or the pmndrs postprocessing library, where each pass maintains an input buffer and an output buffer so the output can’t be overwritten further down the chain, Babylon.js requires extra handling because the texture of post-process #i is used as the render target of post-process #i-1. Furthermore, a bridging buffer is needed when going from downsampling to upsampling. Does this incur additional overhead, and what was the original design intent?

I’m not sure why you need an additional buffer here? I can’t see it in your PG above.

I think the goal was to make post-processing as easy to use as possible: you just create post-processes, and they are automatically linked to each other, so you don’t need to explicitly link the output of one post-process to the input of another.