Cross-platform GUI (VR, Desktop, Mobile)

Are there any best practices for making a cross-platform GUI?

A while back we used AdvancedDynamicTexture.CreateForMesh a lot, but the performance was terrible, and it would crash on iPhone because it created too many canvases.

I would like to use the fullscreen UI instead. From what I can tell it doesn’t create a ton of canvases, but it doesn’t work in VR.

So is there any way to get the fullscreen UI working in VR, or are there optimization tips to make AdvancedDynamicTexture less terrible?


cc @RaananW

Fullscreen UI is always an interesting issue. Think about the user experience - how can you click a fullscreen UI if you are immersed in the scene? Fullscreen UI can be used for HUD-style information panels, but if you want a unified UI it doesn’t make a lot of sense, since you cannot interact with it.

You can always make a single UI for the desktop and mobile experiences, and when in VR use a single mesh close to the camera to project the same UI (the creation function is the same; the only difference is which function is used to generate the dynamic texture).

So I should have clarified that the UI I’m talking about doesn’t actually have any buttons. It’s just visual affordances, primarily text overlays on meshes (labels), and the clicking is done by selecting the mesh itself.

Can you explain more about how you would project the desktop UI to a mesh in VR mode? This sounds like it might be something I’d want to try.

The UI is added to the root node of an AdvancedDynamicTexture. If you build the UI in a function that takes this root node (or the ADT directly), you can call that function on two different ADTs: a fullscreen ADT for desktop, and a mesh-based ADT for VR.
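A minimal sketch of that pattern, assuming Babylon.js and its GUI module are loaded as `BABYLON` / `BABYLON.GUI` (the helper names `buildLabelUI` and `createUITexture` are hypothetical, not part of the Babylon API):

```javascript
// Build the same control tree on whichever ADT is passed in.
function buildLabelUI(adt) {
  const label = new BABYLON.GUI.TextBlock("label", "Station 3");
  label.color = "white";
  adt.addControl(label);
  return adt;
}

// Pick the ADT factory per platform.
function createUITexture(scene, inXR, vrMesh) {
  if (inXR) {
    // VR: project the UI onto a mesh placed near the camera.
    return BABYLON.GUI.AdvancedDynamicTexture.CreateForMesh(vrMesh, 1024, 1024);
  }
  // Desktop/mobile: one fullscreen ADT.
  return BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("UI", true, scene);
}

// Usage: buildLabelUI(createUITexture(scene, xrActive, uiPlane));
```

The point is that `buildLabelUI` never needs to know which kind of texture it is drawing into, so the UI code stays identical across platforms.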

Another approach you can take, especially if your UI is just information and not buttons or interactable elements, is to create one UI on a mesh, but make the mesh “fullscreen”. This is a bit of a hack, but this playground might explain it better:

Babylon.js Playground

I am creating a ground element and stretching it across the viewport. In XR it will also be stretched correctly, making it seem like a fullscreen UI, since the mesh is just 0.1 units away from the headset.
But I would recommend the first approach over the second, just because I find the second to be a bit of a hack. Performance-wise the second is as good as the first; there are a few extra calculations to check the size of the ground, but that can be optimized.
