Updating shadow generator/shadow blur for new assets/optimisation

Hi There,

First off, thank you for creating such a wonderful engine! :heart_eyes:

I am trying to update a ShadowGenerator based on new assets and/or frame rate (SceneLoader.LoadAssetContainerAsync). However, I’ve run into two challenges I can’t seem to crack or find an answer to.

Q1. When I add meshes to the generator that are already being rendered, I’m getting artefacts over their surfaces. I think the blur close exponential shadow map filter (useBlurCloseExponentialShadowMap) is not being applied to the new meshes, and I have been able to replicate this in a simple playground: https://www.babylonjs-playground.com/#R9G1C8#4.
I have exaggerated this for the sake of demonstration by making the shadow map size very small. You can see by uncommenting line 33 that if the meshes are added to the generator at the point of creation, the blur is applied as expected. However, if the mesh is added on a timeout (to illustrate staggered loading or similar), the blur is not applied. Among many other things, I have tried disposing the shadow generator and creating a new one, which didn’t solve my problem either.
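In code terms, what I’m doing is roughly the following (simplified from the playground above, so the light/mesh variables are just placeholders rather than my actual code):

// Generator created up front with the blur filter on (tiny map size to exaggerate the effect)
var shadowGenerator = new BABYLON.ShadowGenerator(256, light);
shadowGenerator.useBlurCloseExponentialShadowMap = true;
shadowGenerator.addShadowCaster(existingMesh); // blur applies here as expected

// Later on (stand-in for a staggered SceneLoader.LoadAssetContainerAsync load)
setTimeout(function () {
    shadowGenerator.addShadowCaster(newlyLoadedMesh); // artefacts appear; the blur doesn't seem to apply to this one
}, 2000);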

So the question is: is there a way to update the shadow generator so that the blur filter is applied to these meshes as well?

Q2. I am building a simple custom scene optimiser (working in reverse to the built-in one to improve resolution etc., as well as tailoring the LODs to cater for a client’s older Android device… it’s amazing how much less performant a non-flagship phone from two or three years ago is). With that in mind, I was wondering if there is a way to update the shadow map size? No worries if not; at the moment I am simply disposing and recreating the generator, but I could avoid a lot of array iteration if this were possible.
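At the moment that workaround looks roughly like this (a simplified sketch; shadowGenerator, newMapSize and light stand in for my own variables):

// Copy the current render list, rebuild the generator at the new size, then re-add every caster
var oldCasters = shadowGenerator.getShadowMap().renderList.slice();
shadowGenerator.dispose();

shadowGenerator = new BABYLON.ShadowGenerator(newMapSize, light);
shadowGenerator.useBlurCloseExponentialShadowMap = true;
oldCasters.forEach(function (mesh) {
    shadowGenerator.addShadowCaster(mesh); // this is the array iterating I'd like to avoid
});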

Any help would be much appreciated, and sorry if these questions are already answered somewhere.

Thanks for your time!

P.S
Whilst I’ve got you, I’d love to get some opinions on the point at which Draco compression becomes unfavourable in terms of file size, etc. I have lots of small assets (ranging from 2 KB to 60 KB, with a few 200 KB+ assets), and from experimenting I’m seeing a small improvement in loading times when the assets are not compressed.
So I’m thinking I could save 600 KB by removing the decoder (if I remember correctly); however:

A. The bulk of my testing is on localhost
B. The user has to use more data (not great for mobile)
C. Some of the high-poly LODs are actually 800 KB+ uncompressed, and these are lazy loaded (roughly the pattern sketched below), so there is a benefit to keeping Draco for these assets, as there is no impact on the initial render/loading-screen time.
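For point C, the lazy loading is roughly this pattern (file names are placeholders): the small uncompressed assets gate the first render, and the big Draco-compressed LODs only start downloading once the scene is up.

// Small, uncompressed assets are loaded up front and gate the loading screen
BABYLON.SceneLoader.LoadAssetContainerAsync("assets/", "smallAsset.glb", scene).then(function (container) {
    container.addAllToScene();
});

// High-poly Draco-compressed LODs are only fetched once the scene is ready,
// so their download/decode cost never touches the initial render
scene.executeWhenReady(function () {
    BABYLON.SceneLoader.LoadAssetContainerAsync("assets/", "highPolyLOD.glb", scene).then(function (container) {
        container.addAllToScene();
    });
});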

Sorry for the essay, and sorry if I have put this in the wrong place.

Welcome aboard!

1/ is a bug that will be corrected by:

Regarding 2/, changing the texture size leads to rebuilding a lot of the internal data of the shadow generator, so it’s somewhat the same thing as recreating it. That said, adding the possibility to change the map size is not really difficult, so here you go:

Regarding 3/, I don’t know Draco compression so I can’t help, but you should create another topic if you want better exposure; otherwise your question could be missed.


Well, that was quick! Thank you for your help.

I should really get up to speed with TypeScript so I can help contribute on these things.

Also, I took a look at your Tomb Raider page. It’s brilliant! Takes me back.


This was pushed to master at the end of yesterday. I updated today to 4.2.0-beta.9 and everything works perfectly. Thank you so much!

For anyone in the future:

You can now add meshes to the shadow generator whenever you like, and “useBlurCloseExponentialShadowMap” will apply to these new objects too.
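For example (the mesh variable here is just illustrative):

yourShadowGenerator.addShadowCaster(yourLateLoadedMesh); // the blur filter now applies to this mesh as well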

The shadow generator also now has a setter for the mapSize so you can change this on the fly:

var yourShadowGenerator = new BABYLON.ShadowGenerator(512, yourLight); // resolution is 512 here
… // some code…

yourShadowGenerator.mapSize = 2048; // shadow map resolution is now 2048

Thanks again Popov, you’re a star!


I don’t have much data to go on, but the Draco decoder is not small, and if the assets are small, the decoder cost might not be worth the compression benefits. If you can use glTF, consider using meshoptimizer with KHR_mesh_quantization, which should reduce the size of the geometry without adding additional code/download overhead.

There is also EXT_meshopt_compression from the same author, but it’s not quite ready to use yet.

Anyways, some options to explore.


Thank you bghgary, I wasn’t aware of these two options; they sound perfect. I’ll take a look :+1:
