I will spend some more time seeing if we are leaking textures.
PR is here:
Well that was fast!
To sum up after tests and experiments:
Safari's garbage collection doesn't free resources fast enough (see the issues with a fast-changing canvas).
At the same time, you are close to the memory-footprint limit: close enough that resizing the canvas leads to a lost context.
I can think of 2 potential solutions:
- When a resize occurs, dispose the engine and create a new one. I'm not very confident in this one, as I think the GC will not kick in fast enough.
- Or do not resize at all, and change the CSS so the canvas displays properly based on the orientation/size of the page. I think this is the most reasonable solution (see the sketch below).
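For the second option, here's a minimal sketch of what I mean (the fixed 1280x720 buffer size and the canvas id are just examples):

```ts
import * as BABYLON from "babylonjs";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas, true);

// Pick one backing-buffer size up front and keep it for the engine's lifetime,
// so Safari never has to reallocate the drawing buffer.
engine.setSize(1280, 720);

// Let CSS stretch the element to fit the page; never call engine.resize() after this.
canvas.style.width = "100%";
canvas.style.height = "100%";
```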
As Cedric said. In the meantime, I will ping the Safari folks to see if they have any tricks up their sleeves: https://twitter.com/sebavanjs/status/1319313928455921664
Also, to reduce your overall memory a bit, instead of using the image-processing post-process + FXAA you could rely on a full-screen plane in the background to apply the vignette.
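Something along these lines, as a rough sketch (the gradient values, texture size, and plane placement are illustrative, so you'd want to tune them):

```ts
import * as BABYLON from "babylonjs";

// Bake the vignette once into a small DynamicTexture and show it on a
// camera-locked plane, instead of running the post-process chain every frame.
function addVignettePlane(scene: BABYLON.Scene, camera: BABYLON.Camera): void {
  const size = 256;
  const texture = new BABYLON.DynamicTexture("vignette", size, scene, false);
  const ctx = texture.getContext() as CanvasRenderingContext2D;
  const gradient = ctx.createRadialGradient(size / 2, size / 2, size / 4, size / 2, size / 2, size / 2);
  gradient.addColorStop(0, "rgba(0, 0, 0, 0.0)"); // clear center
  gradient.addColorStop(1, "rgba(0, 0, 0, 0.6)"); // dark edges
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, size, size);
  texture.update();
  texture.hasAlpha = true;

  const material = new BABYLON.StandardMaterial("vignetteMat", scene);
  material.disableLighting = true; // plain black, faded by the gradient's alpha
  material.opacityTexture = texture;

  const plane = BABYLON.MeshBuilder.CreatePlane("vignettePlane", { size: 2 }, scene);
  plane.material = material;
  plane.parent = camera;
  plane.position.z = 1.1;     // just in front of the camera; size it to cover the view
  plane.renderingGroupId = 3; // draw after the rest of the scene
  plane.isPickable = false;
}
```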
Question about refraction textures: if two meshes are set to refract each other, they are not in some recursive loop, correct? I assume that each mesh does a render pass with no post-process and uses that as the input to the refraction shader? We're working on checking whether we are leaking textures anywhere. One other thing I'm noticing is that in certain configurations, parts were set up to refract themselves, so we're addressing that to see if it was causing the problem.
About the refraction one: they actually might, but I don't think I ever tested it, as you would need two real-time probes, which can be expensive if there are any animations. Without animations it might be easier.
About the shader, your assumption is exactly right, but I guess each mesh will try to take an updated version of the other's texture, which might create the loop.
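If it helps with the self-refraction case, a minimal sketch of how I would keep a mesh out of its own refraction pass (the helper name and the 512 texture size are mine):

```ts
import * as BABYLON from "babylonjs";

// Give each refractive mesh its own RefractionTexture whose renderList
// excludes the mesh itself, so a part can never end up refracting itself.
function makeRefraction(mesh: BABYLON.Mesh, scene: BABYLON.Scene): BABYLON.RefractionTexture {
  const refraction = new BABYLON.RefractionTexture(mesh.name + "_refraction", 512, scene, true);
  refraction.renderList = scene.meshes.filter((m) => m !== mesh);
  refraction.depth = 2.0; // depth of the refraction plane, tune per scene
  (mesh.material as BABYLON.StandardMaterial).refractionTexture = refraction;
  return refraction; // remember to dispose() it when the mesh goes away
}
```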
The leak is probably not on your side, as you can see here: WebKit bug 218100 – Rapidly resizing a WebGL-rendered canvas leaks memory
Well, we think we fixed some issues on our side that have at least stopped Safari from crashing. It may be that our application was exposing these Safari issues in a similar way to your contrived example, so it was a combination of Safari issues and issues in how we were handling textures and certain updates. We made the following updates:
- Filtering out parts that would refract themselves.
- Correctly disposing the refraction textures.
- Preventing the DynamicTexture from updating when its content doesn't change.
- Preventing engine resize when the width and height were not greater than zero.
This last one was interesting because I saw that on page scroll the window resize handler was firing, but I suspect with a width/height of zero. I'm wondering if trying to divide 0 by 0.5 (width/pixelDensity) was breaking something in Safari? Anyway, by validating those values before allowing the resize to be called, we can no longer reproduce the issues.
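In case it helps anyone else, here is roughly what those last two guards look like (a simplified sketch, not our production code; drawLabel is an illustrative helper):

```ts
import * as BABYLON from "babylonjs";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas, true);

// Only resize when the canvas actually has a real size; on Safari the
// resize handler can fire during scroll with a zero width/height.
window.addEventListener("resize", () => {
  if (canvas.clientWidth > 0 && canvas.clientHeight > 0) {
    engine.resize();
  }
});

// Skip the DynamicTexture redraw (and the GPU upload it triggers)
// when the content hasn't changed.
let lastLabel = "";
function drawLabel(texture: BABYLON.DynamicTexture, label: string): void {
  if (label === lastLabel) {
    return; // same content, nothing to upload
  }
  lastLabel = label;
  texture.drawText(label, null, null, "bold 44px Arial", "white", "transparent", true);
}
```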
This link has Babylon 4.2.0-beta.16 and those fixes:
So glad you fixed it, but I'll still keep an eye on the Safari issue and let you know!!!
I hate to jump in on a 49-post topic, but the topic about KTX textures consuming a lot of CPU memory that just started might be of quantitative interest here.
Looks like the resize issue might get fixed soon in Safari!!!
Just a quick update here. We were able to improve our performance quite a bit with these optimizations, but there are still performance differences between 4.1 and 4.2. We're trying to determine the cause and will report back.
Please do, as we do not want a slower 4.2.
I’m not sure if this is a clue, but we’ve narrowed the issue on Chrome Android down to canvas resizing. We added some code to turn off the render loop based on the app state (when 3D is loading/idle but not shown) and performance is drastically better. Was there a change to any code that dealt with pausing the render loop?
Hmmmm, interesting. I do not think so, but I'll have a quick look ASAP. In the meantime, if it is related to this, you could handle runRenderLoop/stopRenderLoop in your code for the moment?
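Something like this, as a sketch (setRenderingActive is just an illustrative name for whatever hook your app state exposes):

```ts
import * as BABYLON from "babylonjs";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas, true);
const scene = new BABYLON.Scene(engine);
// ... camera and scene setup omitted ...

const renderFn = () => scene.render();

// Start or stop rendering based on app state (e.g. while 3D is loading or not shown).
function setRenderingActive(active: boolean): void {
  if (active) {
    engine.runRenderLoop(renderFn);
  } else {
    engine.stopRenderLoop(renderFn);
  }
}

// Also worth pausing when the tab itself is hidden:
document.addEventListener("visibilitychange", () => {
  setRenderingActive(!document.hidden);
});
```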
Yes, we think this is a good optimization regardless, so we've implemented it for our next release.