Multiple SPS in the scene?

Hi guys,
I missed you :wink: but I tried to stay as connected with you as possible.
Hope you are all good!

I have some time to do experiments, so why not. I want to create a big scene with a maximum of 5 custom imported objects, but those objects are low poly, approx. 500 verts each.

The first attempt was with instances… It worked, but after I draw all the objects, even though they are instances, the scene moves with a delay.

The second attempt was with SPS, which works pretty well, but here I have another problem that I'm trying to understand. Maybe I'm doing something wrong, I don't know.
I created an SPS for each imported object, calculated all the transforms I need, and at the end I create the SPS. But if I increase the number of SPSes, the scene freezes until the SPS is done, or in the worst case I get a snag error.
I found a workaround: if I put a delay between SPS creations it seems to work better, but I still get some freezes or memory errors for big numbers.
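For reference, that delay trick can be written as a small helper that yields to the event loop between heavy builds, so the page can repaint (and the GC can run) between them. The task functions here are placeholders for the actual per-object SPS creation:

```javascript
// Run a list of heavy build steps one at a time, pausing `delayMs`
// between them. Each task would create one SPS and call buildMesh().
function buildStaggered(tasks, delayMs) {
  return tasks.reduce(
    (chain, task) =>
      chain.then(
        () =>
          new Promise((resolve) =>
            setTimeout(() => {
              task(); // e.g. build one SPS here
              resolve();
            }, delayMs)
          )
      ),
    Promise.resolve()
  );
}
```

The returned promise resolves after the last build, which is also a convenient place to hide any "loading" message.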
So the question is: am I doing something wrong? Is there maybe a better solution?

E.g.: if you increase rows and cols to 40 and 50 it begins to freeze, and with 70, for example, I get a WebGL error.

Thanks :beers:


Instead of creating multiple SPSes, create one with multi-materials.

You will need to do some tidying up and optimizing of the code, but something like this, or with more shapes.
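A rough sketch of that idea (not the actual playground code): the `useModelMaterial` option makes the SPS build one multi-material from the models' own materials. `model0`/`model1` and the particle counts are illustrative; this needs a Babylon.js scene to run:

```javascript
// One SPS holding several shapes; useModelMaterial keeps each
// imported model's own material via an internal multi-material.
const sps = new BABYLON.SolidParticleSystem("sps", scene, {
  useModelMaterial: true
});
sps.addShape(model0, 2000); // 2000 particles shaped like model0
sps.addShape(model1, 2000);
const spsMesh = sps.buildMesh();
model0.dispose(); // the source models are no longer needed
model1.dispose();

sps.initParticles = () => {
  for (const p of sps.particles) {
    // apply whatever transforms each object needs
    p.position.set(Math.random() * 100, 0, Math.random() * 100);
  }
};
sps.initParticles();
sps.setParticles(); // flush positions into the mesh
```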


Niceee. This really improves the scene. Thanks @JohnK.
One more thing: is there a way to know when the SPS is built? I want to show a message like 'the scene is building' before creating it, and hide it once the scene unfreezes and the SPS is complete.

There is no onBuildMesh observable as far as I know. Can't help with this one, sorry.
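One possible workaround, since `buildMesh()` runs synchronously on the main thread: show the message, give the browser a moment to actually paint it, then start the build and hide the message when the call returns. A minimal sketch with placeholder functions for the UI and the SPS building:

```javascript
// showMessage / hideMessage / buildAllSPS are placeholders for your
// own "scene is building" UI and SPS-creation code.
function buildWithNotice(showMessage, hideMessage, buildAllSPS) {
  showMessage(); // e.g. unhide a status div
  setTimeout(() => {
    buildAllSPS(); // the synchronous, freezing part
    hideMessage(); // runs as soon as the build returns
  }, 50); // yield so the message gets rendered first
}
```

No observable is needed: the line after the synchronous build is, by definition, "after the SPS is built".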

Thanks. I’ll try

Hi. I'm back with one more question.
Is there a way to somehow avoid the out-of-memory error?
Or to find out what the limit is? Thanks :wink:


I am also trying to use SPS near the RAM limit, and what I learned is that using several SPSes solves the RAM problem. Also, if you want to update particles, the larger the SPS, the slower the update is.

So I think it's good to size each SPS so that the updates are fast enough, while not making so many SPSes that it no longer makes sense to use an SPS in the first place.
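That trade-off can be made concrete with a small helper that splits a total particle count into SPS-sized chunks, one chunk per SPS. The cap is whatever your RAM budget allows; 250K is just the value used below:

```javascript
// Split `total` particles into chunks of at most `maxPerSPS` each.
// e.g. 500000 with a 250000 cap -> [250000, 250000]
function chunkParticleCounts(total, maxPerSPS) {
  const counts = [];
  for (let left = total; left > 0; left -= maxPerSPS) {
    counts.push(Math.min(left, maxPerSPS));
  }
  return counts;
}
```

Each entry then becomes the particle count passed to one SPS's `addShape` call.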

Here is an example with 2×250K boxes. Each batch of 250K boxes is in a separate SPS; trying to do 500K in a single SPS leads to out of memory. Babylon.js Playground

The example takes ~3.5GB of RAM on my machine, and in the profiler you can clearly see the GC happening between the two meshes being built:


I tried doubling the number of particles to 500K and using only one SPS, and noticed no difference in the build. There is, however, a big difference during picking. I also noted that in your PG you are only rotating the mesh and picking on the first SPS, in which case you could make the second one, SPS_two, immutable.
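For reference, an SPS is made immutable at construction time with the `updatable: false` option; particles are then placed once, via a `positionFunction` passed to `addShape`. A sketch, with `model`, the count, and the placement logic as placeholders (needs a Babylon.js scene):

```javascript
// updatable: false builds a fixed mesh: no per-frame setParticles()
// work, and less memory kept around for updates.
const spsTwo = new BABYLON.SolidParticleSystem("SPS_two", scene, {
  updatable: false,
  isPickable: false // also skip building the picking data
});
spsTwo.addShape(model, 250000, {
  // with an immutable SPS, each particle is placed once, here
  positionFunction: (particle, i) => {
    particle.position.x = (i % 500) * 2;
    particle.position.z = Math.floor(i / 500) * 2;
  }
});
spsTwo.buildMesh();
```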

That is a bit weird; I tried several times to build a 500K mesh and every time I got out of memory… Maybe there is some difference in the browser? Or some RAM limit for the process? I am running on 64GB of RAM with >34GB free while running the demo, so I doubt I am running out of hardware RAM. :smiley: It may also be a Linux limitation of some kind… a 4GB cap that kills the tab?

I am running with the following spec:
Ryzen 2700
Ubuntu 18.04
Chrome: Version 81.0.4044.92 (Official Build) (64-bit)

And yes, the demo is focused only on building the mesh itself; the rest should be ignored ATM. Maybe it would run better with an immutable SPS, but that is not the use case I wanted to demonstrate.

Out of interest, why the extremely large number of particles? This topic discusses the issue of a large quantity of meshes: Instanced Meshes and Performance

Posts of interest in the topic may be
Second half of


@JohnK thanks again! I missed that topic.
@Mikal_Vratchanski thank you too

I ended up with the same solution that was the problem at the beginning :laughing:
I'm using 3 different SPSes with a small delay between them, and this way I can create up to 100x100x5 dimensions, which is more than enough.
