I was playing around with this downscaling chain method, sampling 13 points from the higher-resolution texture at each reduction step in the chain, and I'm not sure if it's correct:
https://playground.babylonjs.com/#JZYCBP
Questions:
- When creating a PostProcess at a lower size/ratio, will the default textureSampler be the higher-resolution texture, or the lower, resized texture of the PostProcess?
- In my case, I need the higher-resolution texture to sample from, and to do that I also need the pixelSize so that I can offset accordingly. Am I calculating it correctly in my code?
Can someone enlighten me as to whether this is the correct way to do it?
Is there a more performant way to achieve this chain of downscaling?
When a post-process runs, its shader:
- executes with textureSampler = texture of the post-process
- renders to the texture of the next post-process in the chain
For the first post-process, the scene is rendered into its texture before the shader executes, and the last post-process renders to the screen.
So, your code is correct, except for the * 2.0 part in the shader. You don’t need this multiplication because the texture used by the shader is the post-process texture, and you pass its dimensions through the texSize uniform.
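As a minimal sketch of that rule (shader name and ratios are illustrative, based on your PG): each post-process samples its own texture through textureSampler, so texSize should be set to that post-process's own dimensions.

```javascript
// Illustrative sketch: each post-process samples its OWN texture via
// textureSampler and renders into the NEXT post-process's texture.
const ppA = new BABYLON.PostProcess("ppA", "Downscale", ["texSize"], null, 1.0, camera);
const ppB = new BABYLON.PostProcess("ppB", "Downscale", ["texSize"], null, 0.5, camera);

// texSize must match the texture the shader actually samples,
// i.e. the post-process's own texture:
ppA.onApplyObservable.add((effect) => {
    effect.setFloat2("texSize", ppA.width, ppA.height);
});
ppB.onApplyObservable.add((effect) => {
    effect.setFloat2("texSize", ppB.width, ppB.height);
});
```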
You can save some performance by applying the downscaling right from the first post-process instead of doing a copy.
For better performance, you should downscale first in the horizontal direction, then in the vertical direction. That works because the blur is separable, and you will do fewer texture reads by proceeding like this. You can use the existing BlurPostProcess to simplify your code:
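For instance, something along these lines (kernel size and ratios are illustrative):

```javascript
// Separable blur + downscale: a horizontal pass followed by a vertical pass.
// Each pass reads N texels per pixel instead of N*N for a full 2D kernel.
const blurH = new BABYLON.BlurPostProcess("blurH", new BABYLON.Vector2(1, 0), 15, 0.5, camera);
const blurV = new BABYLON.BlurPostProcess("blurV", new BABYLON.Vector2(0, 1), 15, 0.5, camera);
```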
@Evgeni_Popov
Thanks for the clarification. But I still don't understand what the default textureSampler and texelSize (resolution, size, dimensions) are in a rescaled PostProcess.
Are they referencing the current PostProcess's dimensions, or the previous, higher-resolution texture? Like this?
Scene (1080p) → PostProcess 1 (540p) → PostProcess 2 (270p) → PostProcess 3 (135p)
The chain:
PostProcess 1 shader (created at 0.5 / 540p)
- textureSampler = 1080p (from the previous step, the Scene)
- texelSize = 1 / 1080
- Output = 540p
PostProcess 2 shader (created at 0.25 / 270p)
- textureSampler = 540p (from the previous step, PostProcess 1)
- texelSize = 1 / 540
- Output = 270p
PostProcess 3 shader (created at 0.125 / 135p)
- textureSampler = 270p (from the previous step, PostProcess 2)
- texelSize = 1 / 270
- Output = 135p
Is the default texelSize in postProcess.ts taking the "current" PostProcess's size, which would then not be in sync with textureSampler (assuming textureSampler is at the resolution of the "previous" PostProcess)?
https://github.com/BabylonJS/Babylon.js/blob/master/packages/dev/core/src/PostProcesses/postProcess.ts
```javascript
this._texelSize.copyFromFloats(1.0 / this.width, 1.0 / this.height);
```
Am I understanding this wrong?
Again, all my questions are specific to rescaled PostProcesses.
I’m not sure what you call “rescaled post-process”.
If I take the post-processes you create in your PG:
```javascript
const firstPP = new BABYLON.PostProcess("firstPP", "DoNothing", ["texSize"], null, 1.0, camera, filter);
const secondPP = new BABYLON.PostProcess("secondPP", "Downscale", ["texSize"], null, 0.5, camera, filter);
const thirdPP = new BABYLON.PostProcess("thirdPP", "Downscale", ["texSize"], null, 0.25, camera, filter);
const forthPP = new BABYLON.PostProcess("forthPP", "Downscale", ["texSize"], null, 0.125, camera, filter);
```
- firstPP is created with a ratio of 1, meaning texture_firstPP is full size (1080p). The scene is rendered into that texture.
- secondPP is created with a ratio of 0.5, meaning texture_secondPP is half size (540p).
- firstPP applies its shader (“DoNothing”) to its texture (textureSampler = texture_firstPP) and generates its output to texture_secondPP. So, at this point, you have the original scene copied to texture_secondPP
- thirdPP is created with a ratio of 0.25, meaning texture_thirdPP is 1/4 size (270p).
- secondPP applies its shader (“Downscale”) to its texture (textureSampler = texture_secondPP) and generates its output to texture_thirdPP.
- forthPP is created with a ratio of 0.125, meaning texture_forthPP is 1/8 size (135p).
- thirdPP applies its shader (“Downscale”) to its texture (textureSampler = texture_thirdPP) and generates its output to texture_forthPP.
- forthPP applies its shader (“Downscale”) to its texture (textureSampler = texture_forthPP) and generates its output to the screen.
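If in doubt, you can verify this at runtime by logging each post-process's own texture size when it runs (a quick check, using the names from your PG):

```javascript
// Log each post-process's own texture size at apply time, to confirm
// which resolution its shader samples through textureSampler.
[firstPP, secondPP, thirdPP, forthPP].forEach((pp) => {
    pp.onApplyObservable.add(() => {
        console.log(`${pp.name}: ${pp.width}x${pp.height}`);
    });
});
```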
Let me know if it’s clearer this way.
@Evgeni_Popov
Sorry, but I am even more confused.
I'm trying to better understand your explanation with ChatGPT, and I think I'm starting to get it.
Can you please tell me if every line of this chain logic is true?
firstPP (ratio = 1.0)
- Texture size: full resolution [1920×1080]
- Shader: "DoNothing"
- Input texture: texture_firstPP [1920×1080] (the scene)
- Output texture: texture_secondPP [960×540] (defined when creating secondPP at 0.5)
Meaning:
- Shader executes once per output pixel → [960×540 = 518,400 executions]
- This downscale is done only by Babylon's texture filtering, depending on the filter option (NEAREST/BILINEAR)
secondPP (ratio = 0.5)
- Texture size: half resolution [960×540]
- Shader: "Downscale"
- Input texture: texture_secondPP [960×540] (from firstPP)
- Output texture: texture_thirdPP [480×270] (defined when creating thirdPP at 0.25)
Meaning:
- Shader executes once per output pixel → [480×270 = 129,600 executions]
- Each execution samples from the previous pass texture [960×540]
- Downscales half-res using the 13 samples → quarter-res
thirdPP (ratio = 0.25)
- Texture size: quarter resolution [480×270]
- Shader: "Downscale"
- Input texture: texture_thirdPP [480×270] (from secondPP)
- Output texture: texture_forthPP [240×135] (defined when creating forthPP at 0.125)
Meaning:
- Shader executes once per output pixel → [240×135 = 32,400 executions]
- Samples from the previous pass texture [480×270]
- Downscales quarter-res using the 13 samples → eighth-res
forthPP (ratio = 0.125)
- Texture size: 1/8 resolution [240×135]
- Shader: "Downscale"
- Input texture: texture_forthPP [240×135] (from thirdPP)
- Output: screen [1920×1080] (final output scaled up)
Meaning:
- Shader executes once per output pixel of the final target, which is upscaled to the screen [1920×1080]
- Samples from the previous pass texture [240×135]
If all of this is true, it means that my playground is not implementing the strategy correctly.
What I should do instead:
.
. (run chain)
.
thirdPP (480×270): the shader executes 240×135 times. When it does, it samples the current textureSampler (which is 480×270); before sampling the 13 points, the technique offsets by half a pixel in order to trigger bilinear fetching, then weights all the points and computes the final color, which is written to the forthPP texture at (240×135).
.
. (continue chain)
.
Is this the correct logic?
Yes, the logic looks good to me (I think that when you say, for the second PP, Input texture: texture_secondPP [960×540] (from firstPP), you mean Input texture: texture_secondPP [960×540] (rendered into by firstPP)? Same thing for the other PPs).
If you want to trigger bilinear filtering, you should indeed offset by half a pixel, so do something like vec2 pixel = 1.5 / texSize;
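As a hedged illustration of the idea (not the PG's exact shader): assuming the pass outputs at half the input resolution and the sampler uses a bilinear sampling mode, vUV at an output-pixel center falls on a corner shared by four input texels, so each tap placed on such a corner averages a 2×2 block in a single read. A minimal 4-tap version, registered the Babylon way; the same offsetting idea extends to the 13-tap pattern.

```javascript
// Hedged sketch: a "Downscale" fragment shader relying on bilinear fetching.
// With a half-resolution output, vUV lands on a corner between 4 input
// texels, so each tap below averages a 2x2 block; the 4 taps together
// average a 4x4 block with only 4 texture reads.
BABYLON.Effect.ShadersStore["DownscaleFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler;
    uniform vec2 texSize; // size of the texture being sampled

    void main(void) {
        vec2 texel = 1.0 / texSize;
        vec3 c = texture2D(textureSampler, vUV + texel * vec2(-1.0, -1.0)).rgb
               + texture2D(textureSampler, vUV + texel * vec2( 1.0, -1.0)).rgb
               + texture2D(textureSampler, vUV + texel * vec2(-1.0,  1.0)).rgb
               + texture2D(textureSampler, vUV + texel * vec2( 1.0,  1.0)).rgb;
        gl_FragColor = vec4(c * 0.25, 1.0);
    }
`;
```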