Why are there two image processing implementations?

While reading the BJS source code, I found two different image processing implementations.

One applies image processing to the whole scene, using imageProcessing.fragment.fx;
the other applies image processing to a specific material, via the applyImageProcessing function in xxx.fragment.fx.

So I would like to know why there are two implementations, and in which situations the second one is needed.

file: default.fragment.fx

file: water.fragment.fx

I might have misunderstood the question, but - those are two different materials, using two different shaders. Both support image processing, if enabled. So these are two different scenarios that are not related to one another.

EDIT - I probably did misunderstand the question :-). See answer below

About this - @sebavan, correct me if I’m wrong, but I believe material-level image processing exists to avoid having to use a post process, which would require an additional texture read/write.

Yup, exactly that :slight_smile: In some cases, for simple color adjustments only, it is way more efficient to do it directly in the material.
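
For illustration, here is a minimal sketch of the material-level path (assuming a Babylon.js `scene` already exists; the property names come from the public `ImageProcessingConfiguration` API, so double-check them against your version):

```ts
import { Scene } from "@babylonjs/core";

// Sketch: simple color adjustments applied inside each material's own
// fragment shader, with no extra full-screen pass or texture read/write.
function adjustColorsInMaterials(scene: Scene): void {
    const ipc = scene.imageProcessingConfiguration;

    // false (the default) keeps image processing inlined in the materials
    // instead of running it as a separate post process.
    ipc.applyByPostProcess = false;

    ipc.contrast = 1.4;
    ipc.exposure = 1.2;
    ipc.toneMappingEnabled = true;
}
```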

In other cases, where the full texture is already required unprocessed in linear space, a full-screen pass is preferred.
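
And a sketch of the full-screen case, e.g. when another effect such as bloom needs to read the scene texture unprocessed in linear space first (again assuming a standard setup with a `scene` and `camera`; `DefaultRenderingPipeline` is one way to get this, not the only one):

```ts
import { Scene, Camera, DefaultRenderingPipeline } from "@babylonjs/core";

// Sketch: run image processing as a dedicated full-screen pass so that
// earlier effects (bloom here) see the unprocessed, linear-space texture.
function setupFullScreenImageProcessing(scene: Scene, camera: Camera): void {
    const pipeline = new DefaultRenderingPipeline(
        "pipeline",
        true, // hdr: keep intermediate textures in linear space
        scene,
        [camera]
    );
    pipeline.bloomEnabled = true;
    pipeline.imageProcessingEnabled = true; // applied last, full screen
}
```

Alternatively, setting `scene.imageProcessingConfiguration.applyByPostProcess = true` forces the same shared image processing code to run as a post process instead of inside the materials.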

They both use the exact same code.
