One of its features is the ability to show the configurations the user (client) makes in Augmented Reality. For that, I generate a GLB with BABYLON.GLTF2Export and pass it to Google's model-viewer web component via a blob URL.
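Roughly like this (a simplified sketch; `scene` and the "configuration" file prefix are placeholders from my side):

```javascript
// Simplified sketch: export the current scene as GLB and hand it to
// <model-viewer> as a blob URL ("configuration" is a placeholder prefix).
const modelViewer = document.querySelector("model-viewer");

BABYLON.GLTF2Export.GLBAsync(scene, "configuration").then((glb) => {
    // The exporter returns its output files keyed by name; the GLB is a Blob.
    const blob = glb.glTFFiles["configuration.glb"];
    modelViewer.src = URL.createObjectURL(blob);
});
```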
Please note that, at that moment, both the BJS WebGL canvas and the model-viewer (also WebGL) are running, so I presume the available graphics resources, and the device capabilities in general, are under heavy stress. This is even more true considering that the AR experience only makes sense on a mobile device, which is generally not as powerful as a desktop computer.
That said, I can think of three strategies here to try to improve performance:
(The one I'm implementing now) Fully hide the BJS canvas behind an opaque overlay, which in fact also serves as the container for the model-viewer component. It is not very clear to me whether such an off-screen setup disables or reduces the computation load on BJS. The HTML spec says:
An element is being rendered if it is in a Document, either its parent node is itself being rendered or it is the Document node, and it is not explicitly excluded from the rendering using either:
the CSS ‘display’ property’s ‘none’ value, or
the ‘visibility’ property’s ‘collapse’ value unless it is being treated as equivalent to the ‘hidden’ value, or
some equivalent in other styling languages.
Just being off-screen does not mean the element is not being rendered. The presence of the hidden attribute normally means the element is not being rendered, though this might be overridden by the style sheets.
Call engine.stopRenderLoop when the model-viewer component is shown, and later, when the user exits the AR, restart with engine.runRenderLoop (see the sketch after this list).
Launch the model-viewer in a different web page, passing the GLB blob embedded in the URL. I don't know if that is even possible, or whether having two pages means two separate threads of execution, and thus a better use of resources.
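To make the second strategy concrete, here is a minimal sketch of how I picture combining it with the overlay; `overlay`, `scene`, and the two function names are assumptions, not code from my actual app:

```javascript
// Rough sketch of strategies 1 and 2 combined; "overlay" and "scene"
// are assumed references from my setup.
function enterAR(engine) {
    overlay.style.display = "block"; // opaque overlay hosting <model-viewer>
    engine.stopRenderLoop();         // BJS stops scheduling frames entirely
}

function exitAR(engine) {
    overlay.style.display = "none";
    engine.runRenderLoop(() => scene.render());
}
```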
So that is the scenario. What do you think about it? Any advice?
I would go with this method, though it really depends on the client's video memory. If the original scene is asset-heavy with textures or the like, pausing the render loop won't necessarily free up resources that are cached for the original scene. I would have to see your setup. Are you able to serialize the original scene's state and dispose of it in its entirety when you go to the AR version, then re-initialize it when needed?
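Something like this rough sketch is what I have in mind, assuming your scene survives a round trip through the .babylon serializer (custom shaders or procedural content may need extra handling):

```javascript
// Rough sketch of the idea: snapshot the scene, dispose it while AR runs,
// and rebuild it afterwards. Assumes the scene round-trips cleanly through
// the .babylon serializer.
let snapshot = null;

function disposeForAR(engine, scene) {
    snapshot = JSON.stringify(BABYLON.SceneSerializer.Serialize(scene));
    engine.stopRenderLoop();
    scene.dispose(); // frees meshes, textures, and their GPU buffers
}

async function recreateAfterAR(engine) {
    const scene = new BABYLON.Scene(engine);
    // The .babylon loader accepts the serialized JSON as a data string.
    await BABYLON.SceneLoader.AppendAsync("", "data:" + snapshot, scene);
    engine.runRenderLoop(() => scene.render());
    return scene;
}
```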
I need the model-viewer component, as it gives me access to Scene Viewer (if I want, for example, to take photos or record video in the AR experience) and Quick Look (the USDZ is generated on the fly).
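For reference, this is roughly how I opt into those modes ("viewer" is a hypothetical element id; the attributes are the documented model-viewer ones):

```javascript
// Opt into the AR modes I depend on ("viewer" is a hypothetical element id).
const viewer = document.getElementById("viewer");
viewer.setAttribute("ar", "");
// Scene Viewer on Android, Quick Look on iOS; for Quick Look,
// model-viewer converts the GLB to USDZ on the fly.
viewer.setAttribute("ar-modes", "scene-viewer quick-look");
```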
I have thought about all the aspects you're pointing out.
Of course, a 4th option we were thinking about would be your "dispose - go AR - exit AR - recreate" approach. It is possible, but I'm considering it as a last resort.
I'm interested in why you consider this a last resort. @Pryme8 pointed out valid memory concerns on top of your questions about performance; are you not concerned about your app's memory footprint?
I have personally taken this route with the configurator I'm building (except I'm switching from 3D to 2D).
I meant that the process of reloading/recreating the whole scene would carry a high UX penalty.
That's why keeping all the resources as light as possible has been our approach until now. And, of course, memory footprint is ultimately a performance matter too.
Anyway, we're now at the point of rethinking the whole thing, and that's exactly why we're consulting the community here, looking for other points of view, such as yours and those of @Pryme8 and @jelster.