What I don't understand is why we use two textures (one for shadowSampler, another for depthSampler) for PCSS shadowing.
Since the depth data has already been written into the shadowSampler, why do we need another texture, depthSampler? And what about using shadowSampler to calculate blockerDepth:
Thanks
GuDuJian
That’s because we need a sampler2DShadow to sample one texture (to calculate the % in shadow at that pixel), and a regular sampler2D to sample the other texture (to get the depth at the pixel - your code above).
WebGL does not separate texture from sampler (in which case we could have a single texture and two samplers), so we need two textures.
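Here is a minimal GLSL (ES 3.00) sketch of that two-sampler setup. The uniform names follow the discussion above, but vShadowCoord and the rest are hypothetical and not the actual Babylon.js shadow shader code:

```glsl
#version 300 es
precision highp float;

// Two views of the same shadow-map depth data (in WebGL this ends up
// being two textures, as explained above):
// - shadowSampler is sampled with depth comparison enabled, so the
//   hardware does the compare (and PCF filtering) for us;
// - depthSampler is a plain texture, so we can read raw depth values.
uniform highp sampler2DShadow shadowSampler;
uniform highp sampler2D depthSampler;

in vec3 vShadowCoord; // light-space coords in [0,1], z = receiver depth

out vec4 fragColor;

void main() {
    // One hardware-filtered fetch: compares vShadowCoord.z against the
    // shadow map and returns a filtered "fraction lit" value.
    float visibility = texture(shadowSampler, vShadowCoord);

    // Plain fetch: the stored depth itself, which PCSS needs for the
    // blocker search / penumbra-size estimation (not shown here).
    float blockerDepth = texture(depthSampler, vShadowCoord.xy).r;

    fragColor = vec4(vec3(visibility), 1.0);
}
```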
Hi @Evgeni_Popov
Thanks for your response.
Please see the code below:
Actually, as far as I know, only gl_FragDepth has been written into the texture (shadowSampler).
And the use of sampler2DShadow is just for the hardware-based PCF.
I mean, it seems we could do this step with a sampler2D (sample from a sampler2D and do the PCF manually). If so, we could sample the same texture to get the depth of the pixel for the blockerDepth calculation. Thus only one texture (shadowSampler) would be needed, and the depthSampler seems unnecessary.
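A rough sketch of the manual-PCF idea described above (hypothetical GLSL, not the actual Babylon.js shader; shadowMapTexelSize is an assumed uniform holding 1.0 / shadow map resolution):

```glsl
uniform sampler2D shadowSampler;
uniform vec2 shadowMapTexelSize;

float manualPCF(vec2 uv, float receiverDepth) {
    float lit = 0.0;
    // 9 explicit fetch + compare steps, where a single sampler2DShadow
    // tap would get the compare (and bilinear filtering) done in hardware.
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            vec2 offset = vec2(float(x), float(y)) * shadowMapTexelSize;
            float storedDepth = texture(shadowSampler, uv + offset).r;
            lit += receiverDepth <= storedDepth ? 1.0 : 0.0;
        }
    }
    return lit / 9.0; // fraction of samples not in shadow
}
```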
This is just a discussion; I'm new to Babylon.js and PCSS.
Thank you again.
This would require many more texture fetches to simulate the hardware PCF fetch with a regular sampler. Basically, we currently do up to 32 fetches, which would need to become 128 in this case, adding more cost in the shader itself.
And since in WebGL, unlike HLSL, we cannot simply reuse the same texture with different sampler types, the workaround is to bind it twice: once for the filtering, and once to get access to the depth information to compute the penumbra size.
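For reference, here is a hypothetical GLSL emulation of a single hardware PCF tap with a plain sampler2D, assuming LINEAR filtering, COMPARE_REF_TO_TEXTURE and a LEQUAL compare function; it illustrates where the roughly 4x factor (32 hardware taps vs ~128 manual fetches/compares) comes from:

```glsl
// The hardware compares the reference depth against the 4 nearest texels
// and bilinearly blends the 0/1 results; doing it by hand costs 4 fetches
// and 4 compares per tap.
float manualHardwareTap(sampler2D depthTex, vec2 uv, float ref, vec2 mapSize) {
    vec2 texel = 1.0 / mapSize;
    vec2 pos = uv * mapSize - 0.5;          // texel-space position
    vec2 f = fract(pos);                    // bilinear weights
    vec2 base = (floor(pos) + 0.5) * texel; // uv of the lower-left texel center
    float c00 = ref <= texture(depthTex, base).r ? 1.0 : 0.0;
    float c10 = ref <= texture(depthTex, base + vec2(texel.x, 0.0)).r ? 1.0 : 0.0;
    float c01 = ref <= texture(depthTex, base + vec2(0.0, texel.y)).r ? 1.0 : 0.0;
    float c11 = ref <= texture(depthTex, base + texel).r ? 1.0 : 0.0;
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
```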
Let’s hope WebGPU can come quickly and let us use a single texture and two samplers!