Point Cloud System size attenuation?

Hi all,

I’ve been fiddling around with implementing a large instance/particle-based system in which I can render around 1M instances or so. I have tried: SPS, ThinInstances, SpriteMap, ParticleSystem and PCS. The best performance I got was out of ThinInstances using cubes. I’d like to reduce the number of vertices to only what’s strictly necessary.

With that out of the way, I noticed that the PCS keeps point size constant, no matter the camera distance from them. This makes them unusable for my scenario.

What I’d like to achieve is pretty much something like this: ThreeJS Galaxy / David B. / Observable

There, in the three.js PointsMaterial implementation, there seems to be a sizeAttenuation property that would prove incredibly useful for this scenario. (three.js docs)
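For context, the attenuation in the three.js points vertex shader boils down to scaling the point size by the inverse of the view-space depth. A minimal sketch of that math (this helper is illustrative, not the actual three.js source):

```javascript
// Sketch of perspective size attenuation as three.js applies it:
// gl_PointSize = size * ( scale / - mvPosition.z ) when sizeAttenuation is on.
// viewSpaceZ is negative in front of the camera, so -viewSpaceZ is the distance.
function attenuatedPointSize(baseSize, scale, viewSpaceZ) {
  return baseSize * (scale / -viewSpaceZ);
}

// A point twice as far away is drawn half as big:
attenuatedPointSize(10, 100, -50);  // → 20
attenuatedPointSize(10, 100, -100); // → 10
```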

Does Babylon have anything like this? Could it (easily) be achieved?

Thanks in advance :slight_smile:

We do not support this but this is really cooool. Can you share your current example in the playground so that we can see how best we could add it in?

I’ve tried replicating the example I shared earlier for threejs. Here’s the playground: Babylon.js Playground

As you zoom out you can notice the point size stays the same (by design) and the details of the spiral arms start to fade. This also doesn’t allow the camera to get too close to the points, since the points close to the camera get really small (at the fixed point size) while the ones on the outer edges really stand out.

I can see how to add it in the shader

Let me see if we can make this a material plugin :slight_smile:


Could be done like this: https://playground.babylonjs.com/#UTHA7W#2, where the effect is over-pronounced for demo purposes I guess :slight_smile:
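For anyone reading along, the core of such a plugin is just a few lines injected into the vertex shader. A hedged sketch of what that injection might look like (the injection-point name follows Babylon’s material-plugin convention; the exact PG code may differ):

```javascript
// Sketch of the GLSL a size-attenuation material plugin could inject via its
// getCustomCode("vertex") hook. Dividing by gl_Position.w (proportional to
// view-space depth) makes far points shrink. Illustrative only.
function getCustomVertexCode() {
  return {
    CUSTOM_VERTEX_MAIN_END: `
      #ifdef POINTSIZEATTENUATION
        // attenuate: farther points (larger gl_Position.w) get smaller
        gl_PointSize = pointSize * sizeAttenuation / gl_Position.w;
      #endif
    `,
  };
}
```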


Wow, this looks great! Even better than the previous example if I might say :slight_smile:

I didn’t know about Material plugins but it’s a super neat way of extending the materials.

Thanks a lot!


This is so cool! Just confirming though that in the PG the uniform float sizeAttenuation has no effect? It’s just the boolean define in play?

ohhh yes I did not wire it, let me do that :slight_smile:

This controls the strength of the attenuation https://playground.babylonjs.com/#UTHA7W#5
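To sketch how that wiring works: the shape below mirrors Babylon’s MaterialPluginBase API (getUniforms declares the uniform, bindForSubMesh pushes the value each frame), but the property and default value here are assumptions, not the PG’s exact code:

```javascript
// Hedged sketch of the plugin pieces that declare and bind the uniform.
const sizeAttenuationPlugin = {
  sizeAttenuation: 4.0, // strength of the effect, tweakable at runtime (assumed default)

  // Describes the uniform to the shader compiler.
  getUniforms() {
    return {
      ubo: [{ name: "sizeAttenuation", size: 1, type: "float" }],
      vertex: "uniform float sizeAttenuation;",
    };
  },

  // Pushes the current value into the material's uniform buffer.
  bindForSubMesh(uniformBuffer) {
    uniformBuffer.updateFloat("sizeAttenuation", this.sizeAttenuation);
  },
};
```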


Hi @sebavan I have a follow-up related question on this.

I’ve modified the PG to get and display the depth map, but how do I get the point cloud attenuated points to show up in the depth map?

Other scene geometry is reflected in the depth map, and so are point clouds when using PointCloudSystem, but not when using VertexData. I’ve been messing about in other PGs and have been able to get VertexData-based points displayed in the depth map, but not these attenuated points, i.e. points that all have different sizes. If you rotate the above PG scene so that there’s one or more points close to the camera, you can see in the depth map that it doesn’t seem to be using the attenuated point size but rather one size for all.

Actually, strangely enough, in Chrome on my Windows machine it doesn’t show any point cloud in the depth map, whereas in Chrome on my MacBook it shows faint, non-attenuated points in the depth map.

Setting renderingGroupId to 1 for the plane makes the system clear the depth buffer before rendering the plane, which is why you don’t see anything in the depth texture.

You can use scene.setRenderingAutoClearDepthStencil to disable this clearing, but you will have some display problems where points closer to the camera than the plane will be displayed over the plane. It’s easier for the demo not to use a different rendering group id but to disable depth testing when drawing the plane: that way, the plane will always be displayed over the points. Disabling depth writing will also prevent the plane from being drawn in the depth buffer.
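Roughly, that plane setup could look like this (disableDepthWrite and depthFunction are real Babylon Material properties; the "always pass" constant would be the engine’s ALWAYS comparison value, e.g. BABYLON.Constants.ALWAYS, passed in here as a parameter):

```javascript
// Hedged sketch: make the plane draw on top of everything without touching
// the depth buffer.
function configureOverlayPlane(planeMaterial, alwaysPassDepthFunc) {
  planeMaterial.disableDepthWrite = true;            // never written to the depth buffer
  planeMaterial.depthFunction = alwaysPassDepthFunc; // depth test always passes
  return planeMaterial;
}
```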

However, doing so still won’t fix your problem because the default shader used by the depth renderer does not set the point size, so the PCS is not drawn in the depth texture. You can use depthRenderer.setMaterialForRendering(mesh, material) to instruct the renderer to use a specific material when rendering a mesh into the depth texture.

In the PG below I have created another material plugin, much in the same way as the first plugin, but one that also adds the code necessary to write to the depth texture. This material is then used to draw the PCS in the depth texture thanks to depthRenderer.setMaterialForRendering. Note I have disabled registering the plugin material globally, to avoid having both plugins added to the regular PCS material as well as the depth material.
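In outline, the depth-texture wiring described above looks something like this (function and material names here are placeholders, not the PG’s exact identifiers; enableDepthRenderer and setMaterialForRendering are the real Babylon.js calls):

```javascript
// Hedged sketch: register a separate, plugin-enabled material with the depth
// renderer so the PCS points are written with their attenuated size.
function wireDepthMaterial(scene, camera, pcsMesh, createDepthMaterial) {
  const depthRenderer = scene.enableDepthRenderer(camera);
  // createDepthMaterial is assumed to build the second plugin material
  const depthMaterial = createDepthMaterial(scene);
  depthRenderer.setMaterialForRendering(pcsMesh, depthMaterial);
  return depthRenderer;
}
```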


That’s so awesome @Evgeni_Popov . Thanks for the succinct explanation & example. I doubt I’d have gotten to that point in any reasonable timeframe on my own. :pray:


It really is nice being able ask actual wizards for help :stuck_out_tongue:

@Evgeni_Popov One other thing I’ve just noticed :innocent:

It’s a bit hard to see in the PG but the depth map only seems to be accurate for the initial screen size. If you resize the screen, the proportions of the points in the depth map remain the same.

You can tell if you start with a square aspect ratio for the screen, rotate and zoom so there’s a clear, close/large point, then widen the screen. The large point in the depth map should narrow, but it remains square.

I think this is because the point cloud system uses the pointsCloud property of a material to draw the points, and as designed these points will only ever be drawn as squares.

I’m not sure that’s the case, because if I start with, say, a wide screen / letterbox aspect, rotate and zoom to display a large, close point, then scale the aspect ratio back down to a square, the depth texture shows just a tall, narrow point when you’d expect a square. I.e. if the screen aspect, the depth map, and the plane the depth map is applied to are all square, then the point’s depth mask should also be square, but that’s not the case if the original screen aspect is wide.

The depth renderer does not recreate the texture automatically when the screen size changes. Also, it’s easier to see what’s going on if the plane has the same aspect ratio as the screen:
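The resize fix can be sketched as follows (onResizeObservable, disableDepthRenderer and enableDepthRenderer are real Babylon.js APIs; the wiring itself is an assumption, not the PG’s exact code):

```javascript
// Hedged sketch: recreate the depth renderer when the engine resizes so the
// depth texture matches the new canvas size.
function recreateDepthRendererOnResize(engine, scene, camera, onRecreated) {
  engine.onResizeObservable.add(() => {
    scene.disableDepthRenderer(camera);                 // drop the stale texture
    const renderer = scene.enableDepthRenderer(camera); // rebuild at the new size
    onRecreated(renderer); // re-apply setMaterialForRendering etc. here
  });
}
```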


Thanks @Evgeni_Popov, I can see you’re recreating the depth texture on resize, so I’ll do that. But I’m curious what the added vDepthMetric is for in the depth shader? I can’t tell what impact it’s having, and if I comment it out I can’t see a difference.

It’s the code that generates the right depth value in the texture. If you comment it out, you will see that the depth values are wrong. If you zoom out a lot, you should see the points in the depth texture fading to red because their depths should tend toward 1, but with the code commented out they will stay black.
