Here is the demo:
When the camera gets close to the imported meshes that have IBLShadows applied, the FPS drops significantly.
I'm on macOS with Edge, on an Apple M4 Pro chip.
Is this an issue, or is there something I need to handle additionally?
The PG doesn’t work (files not found):
logger.ts:107 BJS - [11:38:30]: Babylon.js v9.0.0 - WebGL2 - Parallel shader compilation
raw.githubusercontent.com/CedricGuillemet/dump/master/CharController/levelTest.glblevelTest.glb:1 Failed to load resource: the server responded with a status of 404 ()
babylon.js?t=1774941346050:1 Uncaught (in promise) RuntimeError: Unable to load from https://raw.githubusercontent.com/CedricGuillemet/dump/master/CharController/levelTest.glblevelTest.glb: LoadFileError: Error status: 404 - Unable to load https://raw.githubusercontent.com/CedricGuillemet/dump/master/CharController/levelTest.glblevelTest.glb
Sorry, wrong PG. Here is the right one:
I can import the assets successfully and download them in the browser. Let me know whether you can load the assets in the new PG.
This is expected behavior for the IBLShadows pipeline. It runs several full-screen post-processes every frame (voxel tracing, spatial blur, temporal accumulation), so performance scales with how much screen space your meshes cover — the closer the camera, the more pixels need expensive ray marching.
Your playground uses moderately high quality settings. Here are the most effective knobs to improve FPS:
- Reduce shadow render resolution (biggest win):
iblShadowsPipeline.shadowRenderSizeFactor = 0.5; // default 1.0, try 0.5–0.75
- Reduce screen-space shadow samples:
iblShadowsPipeline.ssShadowSampleCount = 4; // default 16
- Reduce IBL sample directions:
// In your constructor options:
sampleDirections: 1, // you currently use 3
- Lower voxel resolution (saves GPU memory):
resolutionExp: 5, // you currently use 6 (64³ → 32³)
Start with shadowRenderSizeFactor = 0.5 — it halves the resolution of all shadow passes and is often barely noticeable visually. Then adjust the others to taste.
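Putting the knobs above together, here is a minimal sketch. The option and property names are the ones discussed in this thread; the exact constructor signature may differ between Babylon.js versions, and the `scene`/`camera` setup is assumed to already exist:

```javascript
// Performance-leaning constructor options (set once at pipeline creation).
const options = {
    resolutionExp: 5,    // 32³ voxel grid instead of the 64³ you use now
    sampleDirections: 1, // fewer environment sample directions per frame
};

// Hypothetical setup — assumes an existing Babylon `scene` and `camera`:
// const pipeline = new BABYLON.IblShadowsRenderPipeline("iblShadows", scene, options, [camera]);

// Runtime properties (tunable live while watching the FPS counter):
// pipeline.shadowRenderSizeFactor = 0.5; // half-resolution shadow passes
// pipeline.ssShadowSampleCount = 4;      // fewer screen-space shadow samples
```

Changing the runtime properties first lets you find an acceptable quality level without recreating the pipeline; the constructor options only take effect on creation.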
Also note that macOS + Edge uses WebGL over Apple Silicon’s GPU, which tends to be slower for heavy screen-space effects compared to native Metal. This can amplify the cost.
By the way, I have to say that even using WebGPU, it does not perform better for me.
This helps a lot, but the shadows now seem a bit lighter.
I also tried it on Windows, and there were performance issues there too.
This makes me want to ask: what is the best practice for this technology if I want to integrate it into a real project?
Does it have to rely on extremely high-performance hardware?
The shadows appearing lighter with fewer sampleDirections is expected — each sample direction traces rays from a different angle of the environment map. With fewer directions, the pipeline samples fewer incoming light directions per frame, so the resulting shadow coverage is less complete. However, the temporal accumulation (shadowRemanence) will help the shadows converge to the correct intensity over multiple frames while the camera is still. If you want to compensate for the lighter look immediately, you can increase shadowOpacity (e.g. set it to 1.0) or increase shadowRemanence closer to 1.0 so the previous frames’ contributions persist longer.
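To make that concrete, here is a small sketch. The property names are the ones used above; 0.95 is just a suggested value for illustration, not a documented default:

```javascript
// Compensate for lighter-looking shadows after reducing sampleDirections.
function compensateLighterShadows(pipeline) {
    pipeline.shadowOpacity = 1.0;    // bring shadows back to full strength
    pipeline.shadowRemanence = 0.95; // keep previous frames' contributions longer
    return pipeline;
}

// Usage: compensateLighterShadows(iblShadowsPipeline);
```

Note that a higher shadowRemanence trades responsiveness for stability: shadows will "lag" slightly behind a moving camera before converging.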
For production use, here are the best practices:
- Start with lower settings and increase quality as needed. A good baseline is resolutionExp: 5 (32³ voxels), sampleDirections: 2, shadowRenderSizeFactor: 0.5, and ssShadowSampleCount: 8. This should run at 60 FPS on most mid-range desktop GPUs.
- Avoid dynamic shadow-casting objects when possible. Voxelization is the most expensive step in the pipeline and must be re-run whenever shadow-casting geometry changes. Moving or animating shadow casters forces a re-voxelization every frame, which is very costly — especially on lower-end devices. For best results, use this pipeline primarily with static geometry. If you do need some dynamic objects, keep the number of dynamic shadow casters to a minimum.
- Adapt settings based on the device. You can detect GPU capability at runtime (e.g. check engine.getCaps() or measure FPS after a few frames) and adjust quality accordingly — lower shadowRenderSizeFactor and sampleDirections on weaker devices, increase them on powerful ones.
- The pipeline is designed for mid-range desktop GPUs and above. It is not meant for mobile or low-end integrated GPUs. A discrete GPU (e.g. NVIDIA GTX 1060+ or equivalent) is the typical minimum target. Apple Silicon Macs (M1–M4) can handle it but at reduced settings — their GPU architecture favors different workloads than the heavy screen-space ray marching this pipeline uses.
- Screen-space shadows (ssShadowsEnabled) are the most expensive per pixel. If you are tight on performance, try disabling them first (ssShadowsEnabled: false) — they add sharp contact shadows but at significant cost.
- shadowRenderSizeFactor is the single biggest performance lever. Setting it to 0.5 reduces the pixel count of all shadow passes by 4× and is often barely noticeable visually, especially when combined with the spatial blur pass.
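The "adapt settings based on the device" point can be sketched as a plain helper that steps quality down based on measured FPS. The tier values below are illustrative, and only the runtime-tunable properties named in this thread are applied (the constructor options would require recreating the pipeline):

```javascript
// Quality tiers, highest quality first.
const TIERS = [
    { shadowRenderSizeFactor: 1.0,  ssShadowSampleCount: 16 },
    { shadowRenderSizeFactor: 0.75, ssShadowSampleCount: 8 },
    { shadowRenderSizeFactor: 0.5,  ssShadowSampleCount: 4 },
];

// Pick a tier from the measured frame rate: step down one tier
// for every ~15 FPS below the target.
function pickTier(measuredFps, targetFps = 60) {
    const deficit = Math.max(0, targetFps - measuredFps);
    const step = Math.min(TIERS.length - 1, Math.floor(deficit / 15));
    return TIERS[step];
}

function applyTier(pipeline, tier) {
    pipeline.shadowRenderSizeFactor = tier.shadowRenderSizeFactor;
    pipeline.ssShadowSampleCount = tier.ssShadowSampleCount;
}

// Usage in a Babylon scene (hypothetical): measure FPS a few seconds
// after startup, then:
//   applyTier(iblShadowsPipeline, pickTier(engine.getFps()));
```

You could also run this check periodically and only step down (never back up) to avoid visible quality oscillation.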
So no, you do not need extremely high-performance hardware — but you do need to tune the settings for your target hardware and prefer static geometry for shadow casting. The defaults are set for quality, not performance.
Got it, thanks for your explanation.