Yes, I encountered the same thing when I logged the depth map for verification:
this._depthReducer.onAfterReductionPerformed.add(async (minmax) => {
    // check the depth map
    const dm = this._depthReducer._depthRenderer.getDepthMap();
    const data = await dm.readPixels();
    console.log(data[0] + " " + data[1] + " " + data[2] + " " + data[3]);

    let min = minmax.min, max = minmax.max;
    if (min >= max) {
        min = 0;
        max = 1;
    }
    if (min != this._minDistance || max != this._maxDistance) {
        console.log(min + " " + max);
        this.setMinMaxDistance(min, max);
    }
});
Kind of off-topic, but the depth map for WebGPU is needlessly populated on all channels; that might be worth fixing.
I have been testing out simple fixes and have a couple that work, but I feel it's best to discuss and test further since there are impacts. On my local copy:

a) I fixed minMaxReducer to properly async/await the _readTexturePixels call (if we go to PR, it would need a check for WebGPU vs WebGL). A sketch of the idea is included after the setMinMaxDistance code below.

b) I added a tolerance for max in setMinMaxDistance. During testing, 10% was sufficient to eliminate all flicker for mouse rotations; for mouse zoom out, I had to raise it to a 40%~50% tolerance. Essentially, I'm making the last cascade slightly longer to account for the difference that a single frame of delay can cause when the pointer moves. Ideally, this value could be user-set, e.g. csm.autoCalcBoundsMaxRangeTol = 0.1.
setMinMaxDistance(min, max) {
    if (this._minDistance === min && this._maxDistance === max) {
        return;
    }

    if (min > max) {
        min = 0;
        max = 1;
    }

    if (min < 0) {
        min = 0;
    }

    // add a tolerance so the last cascade covers a bit more range
    max += 0.1 * max;

    if (max > 1) {
        max = 1;
    }

    this._minDistance = min;
    this._maxDistance = max;
    this._breaksAreDirty = true;
}
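For reference, here is roughly what (a) looks like. This is only a sketch of the idea, not the actual MinMaxReducer source: the method name, the _readTexturePixels helper signature and the channel layout of the 1x1 reduction texture are assumptions on my part.

async _onAfterReduction(lastReductionTexture) {
    // on WebGPU the readback is asynchronous, so it must be awaited before
    // observers (e.g. the CSM generator) consume the min/max values
    const buffer = await this._readTexturePixels(lastReductionTexture);

    // assuming min is stored in the R channel and max in the G channel
    const minmax = { min: buffer[0], max: buffer[1] };

    // notify only once the readback has actually completed; an engine check
    // could keep the current synchronous path for WebGL
    this.onAfterReductionPerformed.notifyObservers(minmax);
}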
What I don't like is that the artificially extended max range can cause additional shadows where they are not meant to be. There is also a very minor shadow quality degradation, but I don't think it's a big deal. We could use a falloff tolerance instead of a constant value.
My other solution is to add a flag that tracks the async delay from the _depthReducer. Then _splitFrustum artificially extends the maxDistance by the tolerance only if the _depthReducer hasn't returned new values yet. Since this happens for only 1~2 frames, it's not a hard tolerance shift. The con is that it's a lot more code for what is essentially a hack, and it still doesn't address abrupt zoom-out flickers unless the tolerance is high.
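Roughly, the idea is something like this (the _minMaxStale flag and the _autoCalcBoundsMaxRangeTol property are made-up names, and _splitFrustum is heavily simplified, so treat it as a sketch only):

// mark the min/max as stale at the start of every frame...
this._scene.onBeforeRenderObservable.add(() => {
    this._minMaxStale = true;
});

// ...and clear the flag once the async reduction has delivered fresh values
this._depthReducer.onAfterReductionPerformed.add((minmax) => {
    this._minMaxStale = false;
    this.setMinMaxDistance(minmax.min, minmax.max);
});

_splitFrustum() {
    let max = this._maxDistance;
    if (this._minMaxStale) {
        // pad the last cascade only while we are still waiting on the GPU readback
        max = Math.min(1, max + this._autoCalcBoundsMaxRangeTol * max);
    }
    // ... compute the cascade breaks from [this._minDistance, max] as usual ...
}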
I guess the point is that mitigation methods exist; it's not an unfixable problem. Then again, for the GPU, CSM is not my preferred choice. I think we need to start developing alternative shadow mapping methods, so users can stick with CSM in WebGL and use other GPU-efficient methods when migrating to WebGPU. The downside is that other methods also need to await texture reads on the GPU, which goes back to the root issue: bjs really needs to rework the shadow computation to account for async texture reads. Cf. why I said the GPU and CPU are different beasts; not all code is compatible as-is…
Thoughts?