defaultPipeline.imageProcessing + individual imageProcessing for Material = weird result

Hi guys :smiley:

We are trying to override/combine defaultPipeline.imageProcessing (contrast, exposure and tone mapping) with an individual material's imageProcessingConfiguration.

We are lowering contrast on one object’s material and boosting contrast for the whole scene, but the result for that one object looks completely off.

We want to make the green ball ignore the global contrast and tone mapping, or at least compensate for the contrast applied by defaultPipeline.imageProcessing.

In our case, we need to upload a custom texture onto a plane, and its contrast and exposure are exaggerated by the global settings.

What are we doing wrong?

Hello :slight_smile:

In your scene, setting contrast and exposure on a material does work, but it is applied in addition to the final post-processing, which comes after. So setting it to 0.5 (the default) has basically no effect.

The key is to use another camera because the global postprocessing is camera-based.
Have a look at this new playground

Camera2 is the same, but not included in the post-processing.
Then, to choose which mesh goes on which camera, it's a matter of using the layerMask param :slight_smile:


1 Like

Thank you! It works :pray:

We tried to avoid "more cameras" for performance reasons, but it looks like the multi-camera option is the only one we have so far :slight_smile:

Cool :+1:

Understood. That said, since you are splitting the meshes of the scene between two layers, it should not impact performance that much. What isn't rendered by one camera is rendered by the other... It's not as if you had doubled the number of cameras in a full scene, which would indeed double the render time.


1 Like

Thank you for your help, @Tricotou !!!
We are now a bit further along with the goal we're trying to reach, but we are facing another problem that we tried to fix with layerMasks and can't wrap our heads around so far.

Here's the demo:

For us it has only one problem: the green ball can be seen through the blue plane.

If you comment out line 16, scene.activeCameras = [camera, camera2];, you will see that one camera has post-processing and shows the red ball and the blue plane, and the second one shows the green ball.

Press “1” to enable the post processing and “2” to disable:

Can the green ball somehow be hidden behind the blue plane when we rotate the scene?
Can layerMask even help with that?
It seems to affect the visible/invisible logic between an object and a camera, but not the camera rendering order.

Any clue would be appreciated :pray:

Ah yes, your problem is that camera 2 is overwriting the depth buffer of camera 1, so the green sphere appears on top of the render in any case.

What you need to do is create a shared depth buffer.
First, output the first camera to a shared render target:

// Render the post-processed camera into a full-screen render target
const rtt = new BABYLON.RenderTargetTexture('renderTarget', {
    width: engine.getRenderWidth(),
    height: engine.getRenderHeight()
}, scene);
rtt.samples = 1; // no MSAA here, so the depth buffer can be shared
camera.outputRenderTarget = rtt;

Then pass it to the non-post-processed camera (noPostProcessCamera) via a PassPostProcess:

// Blit the shared target through the second camera without clearing it
const pp = new BABYLON.PassPostProcess("pass", 1, noPostProcessCamera);
pp.inputTexture = rtt.renderTarget;
pp.autoClear = false; // keep camera 1's output underneath

Finally, share the depth buffer with the defaultPipeline's image processing:

if (!defaultPipeline.imageProcessing.inputTexture.depthStencilTexture) {

Here is the stuff compiled in one Playground :slight_smile:



Wow! Huge thanks!
Looks promising :heart_eyes:
While we're figuring out the above, could you please check the playground you sent, if you have time?
It’s showing empty scene for me :frowning:

Hmm :thinking: Weird. It works on my machine:

Am I the only one?

We’ve got it working, thank you!

The solution you provided rocks! :metal:
But as always, there's a new "Boss" :slight_smile:
Here's what we're trying to fix now:

When we resize the scene, the aspect ratio breaks and the scene freezes (it cannot be rotated and doesn't update). Do you have the same symptoms?

We tried to fix it by dynamically resizing the camera's RenderTargetTexture, but that seems to break this part of the code:

defaultPipeline.imageProcessing.onSizeChangedObservable.add(() => {
        if (!defaultPipeline.imageProcessing.inputTexture.depthStencilTexture) {

Here’s the playground:

Can’t express my appreciation to you @Tricotou for the help you have provided so far :pray: :pray:

1 Like

We found the reason: this method doesn't fully support MSAA.

Thanks again! @Tricotou

@Tricotou is kicking ass :smiley:

1 Like

Hold my beer :arrow_right: Playground :grin:

More seriously, I had a look at this topic where a nice solution to your problem was given, but indeed it was said that it doesn't work with MSAA. And indeed, in the related Playground a resize also triggers a freeze.

My solution is not really neat: since it appears MSAA doesn't support the resize, what I did is dispose the different elements, including the render targets and the defaultPipeline, and recreate them using the engine.onResizeObservable trigger :slight_smile:


PS : @Deltakosh : :fire: link :smile:



@Tricotou @Deltakosh
I assume you’re not surprised, but here we are again :sweat_smile:

When we turned on bloom, the perfect rendering order started failing.

To be honest, at this stage we're a bit lost as to where to dig further :frowning:
Do you think it’s still possible?

@creeo, are you asking for a new feature without even answering the previous post, where I provided an MSAA-enabled solution to your problem? :grinning:

Fortunately, I’m in a good mood today ^^
It appears that pipeline-level effects such as bloom are indeed killing the shared-depth trick at the imageProcessing level.

I managed to get it working using a Glow Layer on the post-processed camera. It's not exactly the same result as bloom, but it can still do the job depending on your final scene.

:arrow_right: Playground



Hi, @Tricotou
I'm sorry, I was so focused on the progress we're making with your help that I didn't even react properly. Thank you again, first of all! :heart:

Your glow solution, though, rocks even more :metal:

We can't use it as is, but it gave us a much better understanding of cameras and rendering. My "partner in crime" is implementing the idea he got from our conversation, and we will share the result as soon as he's done. I hope you'll like it (it's actually very cool :sweat_smile:)

We are fighting a "trivial" problem:
We've got a model with multiple dynamic objects, plus a plane where a user uploads a custom texture. Because of the post-processing (tone mapping, contrast, exposure) the uploaded texture looks too bright and too contrasty, and there are also objects on top of that plane which still need the post-processing to look good.

I’m hoping that shortly we will show you what we got, your help is priceless :pray: