Two questions: the first about SSR performance, and the second about the performance of meshes that are close to each other.


Why does SSR decrease fps by more than 50% when I look down at my feet?
Video from the PG:

Why is the performance so low, and how can I increase it when my camera is near a big set of objects, for example planes? I prepared a PG where you can see the performance drop (on my laptop: i7, MX350 2 GB) with only 3000 planes.


Regarding Q1, this is because the rays shot to calculate the reflections do not intersect any geometry when looking directly down, so there’s no early stop and we perform all the steps of the ray-marching loop (given by the reflectionSamples property). You can try decreasing reflectionSamples, but this may affect the overall quality of SSR. Note that we are in the process of improving SSR.
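For instance, a minimal sketch of tuning the sample count, assuming you use the ScreenSpaceReflectionPostProcess and already have a `scene` and `camera`:

```javascript
// Sketch: trade SSR quality for speed by shortening the ray-marching loop.
// Assumes the ScreenSpaceReflectionPostProcess; `scene` and `camera` are yours.
const ssr = new BABYLON.ScreenSpaceReflectionPostProcess("ssr", scene, 1.0, camera);
ssr.reflectionSamples = 32; // fewer steps per ray when nothing is hit
```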

Regarding Q2, you should have a look at the stats in the inspector:

If most of the time is spent on the GPU then you are fill rate bound…

You could lighten the fragment shader by disabling the lighting and using an emissive color instead if your design allows it.
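For example, a sketch assuming a StandardMaterial is an option for the planes (`scene` and `sps` are yours):

```javascript
// Sketch: an unlit material is much cheaper per fragment than a lit one.
const mat = new BABYLON.StandardMaterial("planesMat", scene);
mat.disableLighting = true;                      // skip per-fragment lighting
mat.emissiveColor = new BABYLON.Color3(1, 1, 1); // flat emissive color instead
sps.mesh.material = mat; // `sps` being your SolidParticleSystem
```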

Or decrease the number of planes because there is a lot of overdraw.

Another thing you could try is setting a material on the SPS mesh with needDepthPrePass=true to prefill the z-buffer, then using the EQUAL depth function (instead of the regular LEQUAL, "less than or equal") when drawing the mesh. That avoids any overdraw at the cost of the depth pre-pass: you will have to test it to know if it’s a win. Here’s how you can do it:

The depth pre-pass disables color writing, so testing engine.getColorWrite() in onBeforeRenderObservable / onAfterRenderObservable lets you know whether you are in the pre-pass or not.
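Putting the pieces above together, a sketch (assuming the SPS mesh is `spsMesh` and `engine` is your Babylon engine):

```javascript
// Pre-pass: fills the z-buffer with color writes disabled.
spsMesh.material.needDepthPrePass = true;

spsMesh.onBeforeRenderObservable.add(() => {
    if (engine.getColorWrite()) {
        // Color pass: only the frontmost fragment matches the prefilled depth,
        // so EQUAL shades each pixel exactly once (no overdraw).
        engine.setDepthFunctionToEqual();
    }
});

spsMesh.onAfterRenderObservable.add(() => {
    // Restore the default depth function for the rest of the scene.
    engine.setDepthFunctionToLessOrEqual();
});
```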


Thank you for the reply!

Unfortunately, your approach doesn’t work for my case. In the live project I use a node material with an alpha channel (alphaMode = blend) for these planes, and if I use a depth pre-pass, only the first row is visible.

Looking at the performance stats, the GPU is the bottleneck, but I don’t understand why. I have already rendered Babylon.js scenes with thousands of polygons, different materials, etc. on this device, and it was OK. Why do I get a different result here?

What do you think: if I don’t use the SPS and instead use instances with an alpha index calculated by distance to the camera, might that improve speed?
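A sketch of computing alpha indices from camera distance (`alphaIndicesByDistance` is a hypothetical helper name, and whether per-instance alphaIndex actually helps in this scene would need testing):

```javascript
// Sketch: rank planes back-to-front so farther planes get smaller alpha
// indices (in Babylon, a mesh with a smaller alphaIndex is drawn earlier).
function alphaIndicesByDistance(positions, cameraPos) {
  return positions
    .map((p, i) => ({
      i,
      d: (p.x - cameraPos.x) ** 2 + (p.y - cameraPos.y) ** 2 + (p.z - cameraPos.z) ** 2,
    }))
    .sort((a, b) => b.d - a.d) // farthest first
    .map((e, rank) => ({ index: e.i, alphaIndex: rank }));
}
// Then, per frame: instancedMeshes[r.index].alphaIndex = r.alphaIndex;
```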

I could also use thin instances, but I don’t know how to calculate alpha indexes for them.

It is important for me to be able to see these planes through the other planes.

I think this is the answer:

Devices such as cell phones and laptops can be very bad when alpha blending comes into play.

Even on desktops it requires more GPU power as alpha blending is more fill rate intensive.

All these won’t help because the material is transparent, which means that all the planes will be rendered and blended with what is currently on the screen. In the worst case, if the 3000 planes are on top of each other and fill the whole screen (and your PG is close to that), then the GPU has to draw screen_width * screen_height * 3000 pixels (with blending!). That’s way too much, but even with far fewer planes, the GPU can be brought to its knees if the planes are too large on the screen.
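The arithmetic above, as a quick sanity check (a toy helper, not Babylon API):

```javascript
// Worst case: every plane covers the whole screen, so each screen pixel
// is shaded and blended once per plane.
function worstCaseBlendedPixels(width, height, planeCount) {
  return width * height * planeCount;
}
// 1920x1080 with 3000 full-screen planes:
// worstCaseBlendedPixels(1920, 1080, 3000) = 6220800000 blended pixels per frame
```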


Thank you! So I thought; it seems in this case I can’t do much else. Maybe using a plain particle system instead may help… As far as I know, alpha is calculated in a different way for simple particles.

I’m not sure I understand this one… But even with a particle system, if alpha blending is enabled then you will end up drawing each plane one over the other, and that is what kills your performance, I think.

I will not get tired today))

Maybe there is a way to render different meshes at different render resolutions. Sounds crazy… but…) If such a possibility exists, I think it is the way to decrease the GPU load from the planes.

My scene represents clouds with a bird flying between them. Maybe I can render the clouds at low resolution and also output a depth map to a render target. Then add a render of the bird under the low-res cloud render, and apply the depth map to it in projection mode on an emissive channel to emulate flying through the clouds.

Using two cameras with different render sizes might easily solve my problem, but I don’t know how to do that.

Maybe you could try to render only the foremost n clouds and not all of them? I think at some point stacking more clouds won’t make a difference because the result is already an opaque pixel.

This may be the best performance gain if you are able to choose a small enough value for n. You will have to sort the list of planes, but since you are not CPU bound, this should not be a problem.
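A sketch of picking the foremost n planes on the CPU (`pickNearest` is a hypothetical helper; hooking the result to SolidParticle.isVisible is an assumption about your setup):

```javascript
// Sketch: sort by squared distance to the camera (no sqrt needed for ordering)
// and keep only the indices of the nearest n planes.
function pickNearest(positions, cameraPos, n) {
  return positions
    .map((p, i) => ({
      i,
      d: (p.x - cameraPos.x) ** 2 + (p.y - cameraPos.y) ** 2 + (p.z - cameraPos.z) ** 2,
    }))
    .sort((a, b) => a.d - b.d)
    .slice(0, n)
    .map(e => e.i);
}
// e.g. per frame: particles[idx].isVisible = kept.includes(idx); sps.setParticles();
```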


Thank you for the help! I decreased the number of planes as you suggested and got a performance improvement of several times!

I also tried using a depth pre-pass for rendering foliage with the alpha-clip transparency mode, and it also increased performance a lot! Look at the picture: it renders at 55-60 fps on my laptop without optimizations like instancing, etc. The pre-pass doubled performance in this case!