Hi @shaderbytes. It’s a great question, and one that I have spent a lot of time debating in my own mind, even to the point of prototyping our current UI as a standard HTML/CSS-based React component.
The TL;DR is:
“It comes down to the ‘feel’ we can get using a 3D engine, with a particular appreciation for Babylon’s excellent PBR support, compared to anything we can do with HTML/CSS or even a 2D engine like Pixi.js.”
And if you want to hear the full story, here it is.
In a very real sense, this is a 3D GUI.
Our text is driven by our “TextMeshString”, which uses pooled and cached instances of high-poly extruded TrueType font characters that we create in Cinema 4D. All of the various borders, including the outer bezel, are 3D meshes, and everything has PBR materials. Lighting for the scene is driven by a custom HDR map. This way, we get an interface that feels distinctly unlike a web page. Each surface has a specific metallic/roughness that combines with the HDR lighting to give all these little glints and highlights across the scene, giving me the look I want.

We also use 3D effects throughout the process of transferring money with Lightning Bridge. Various meshes appear and disappear with animation, and we’ve got this cool SPS-based indicator that we show while waiting for a response from the server: a cloud of 3D ones and zeroes floats by. And then there are the subtle effects I can only do because of our 3D/PBR approach. When the user first connects his wallet, I animate a spotlight to gradually illuminate the scene over 500 ms. It feels nice. It makes the scene feel more alive and less static.
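To give a concrete sense of how lightweight that spotlight reveal is in Babylon, here’s a rough sketch (the function name and target intensity are placeholders, not our actual code): a float animation on the light’s `intensity` property, from 0 to full, over 30 frames at 60 fps, i.e. roughly 500 ms.

```ts
import { Animation, Scene, SpotLight } from "@babylonjs/core";

// Sketch: ramp a spotlight's intensity from 0 to its target value over ~500 ms,
// so the scene appears to "light up" when the wallet connects.
function revealSceneLight(scene: Scene, light: SpotLight, targetIntensity = 1.0): void {
  const fps = 60;
  const anim = new Animation(
    "lightReveal",
    "intensity",
    fps,
    Animation.ANIMATIONTYPE_FLOAT,
    Animation.ANIMATIONLOOPMODE_CONSTANT
  );

  // 500 ms at 60 fps is 30 frames.
  anim.setKeys([
    { frame: 0, value: 0 },
    { frame: 30, value: targetIntensity },
  ]);

  light.animations = [anim];
  scene.beginAnimation(light, 0, 30, false);
}
```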
There is also a practical reason, particular to myself and my small team, that makes this technique more appropriate for us than a traditional HTML/CSS workflow.
We’ve been working with Babylon.js for close to four years now, on several products that we have built as a company on the way to our current “Lightning Bridge” product. These were each more obviously 3D-dependent than our current product. For example, this clip of TRADE : The Game shows our use of Babylon.js to create a social multiplayer game based on real-time competitive simulated trading of the “order book” (aka “level 2”) data that represents real trading activity. In this case, it is the market for Bitcoin vs Dollars on Binance.
Along the way, we have developed our own internal framework for building reactive, data-driven 3D interfaces with Babylon.js. Combined with a Tone.js-based sound engine and an XState-based logic engine, it’s become quite powerful over the years. It’s called GLSG, for “Global Liquidity Scene Graph.”
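To give a flavour of that wiring (a simplified illustration assuming XState v4’s `interpret`/`onTransition` API; the state names and the indicator control are hypothetical, not GLSG’s actual API): a state machine owns the logical state of the transfer flow, and the 3D layer just reacts to transitions.

```ts
import { createMachine, interpret } from "xstate";

// Hypothetical transfer flow: the machine is the single source of truth,
// and the Babylon side only listens for transitions.
const transferMachine = createMachine({
  id: "lightningTransfer",
  initial: "idle",
  states: {
    idle: { on: { CONNECT_WALLET: "connected" } },
    connected: { on: { SEND: "waitingForServer" } },
    waitingForServer: { on: { CONFIRMED: "complete", FAILED: "connected" } },
    complete: { type: "final" },
  },
});

const service = interpret(transferMachine)
  .onTransition((state) => {
    // React to logical state changes with 3D presentation changes,
    // e.g. show/hide the SPS "ones and zeroes" indicator.
    if (state.matches("waitingForServer")) {
      // binaryCloudIndicator.show(); // hypothetical GLSG control
    }
  })
  .start();

service.send("CONNECT_WALLET");
```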
Back in 2019, I posted about it here, with the idea that I might open-source it. I never did, but I’m still very open-minded about doing that if I ever find a few people willing to help me.
With our other products, I’ve always designed their interfaces to be forward-compatible with VR and AR modalities. What that means mostly, is that we have designed our user interfaces so that they can be rendered with a single camera’s view. For 2D style heads-up display elements we composite into a “real” space out in front of this camera. This way, when moving to VR/AR, we can easily attach these elements to the user’s field of view, with inertial smoothing, and we get a user interface that works particularly nicely in VR/AR but that also projects nicely to a 2D screen. A while back, I realized that we can design for the screen and for VR at the same time, by regarding the screen as a portal into a 3D-space, and with a 3D UI projected “just right” so that it fills the dimensions of the screen. Then I realized, that even without considering VR/AR, that this a cool way to make a “2D” UI.
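Here’s a rough sketch of the camera-attachment idea (not GLSG’s actual code; the function name, distance, and smoothing factor are just placeholders): each frame, the HUD root eases toward a point a fixed distance in front of the camera instead of snapping to it, which is what gives the inertial feel.

```ts
import { Scene, TargetCamera, TransformNode, Vector3 } from "@babylonjs/core";

// Sketch: keep a HUD root floating at a fixed offset in front of the camera,
// easing toward its target each frame so it trails slightly.
function attachHudWithSmoothing(
  scene: Scene,
  camera: TargetCamera,
  hudRoot: TransformNode,
  distance = 2,     // assumed distance in front of the camera
  smoothing = 0.15  // assumed 0..1 fraction of the gap closed each frame
): void {
  scene.onBeforeRenderObservable.add(() => {
    // Target point a fixed distance straight out in front of the camera.
    const target = camera.getFrontPosition(distance);
    // Ease toward it rather than snapping, so the HUD trails the head/camera motion.
    hudRoot.position = Vector3.Lerp(hudRoot.position, target, smoothing);
    // Orient the HUD root toward the camera.
    hudRoot.lookAt(camera.position);
  });
}
```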
For example, one of GLSG’s features is our “SceneUI” system. Using it, we can make a hierarchy of “UIControls”, each made up of one or more Babylon.js-based 3D elements. The SceneUI system projects these controls out in front of the first-person camera, using a docking system that lets us snap them to the edges of the view frustum. With this simple docking system in place, we can start to compose scenes, and we can define regions of the screen as “stages” that express 3D activity, such as using Babylon’s animation system to create transitions by positioning and/or scaling UIControls.
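A minimal sketch of what frustum-edge docking can look like (the `DockPosition` type and function here are my own illustration, not GLSG’s API): from the camera’s vertical FOV and the render aspect ratio, you can compute the visible width and height at the control’s depth, then snap the control to an edge in camera-local space.

```ts
import { Camera, Engine, TransformNode, Vector3 } from "@babylonjs/core";

type DockPosition = "top" | "bottom" | "left" | "right";

// Sketch: dock a control to an edge of the view frustum at a given depth.
function dockToFrustumEdge(
  engine: Engine,
  camera: Camera,
  control: TransformNode,
  dock: DockPosition,
  distance = 2 // assumed depth of the control in front of the camera
): void {
  const aspect = engine.getRenderWidth() / engine.getRenderHeight();
  // camera.fov is the vertical field of view, in radians.
  const frustumHeight = 2 * distance * Math.tan(camera.fov / 2);
  const frustumWidth = frustumHeight * aspect;

  // Position in camera-local space; parenting keeps it glued to the view.
  control.parent = camera;
  const local = new Vector3(0, 0, distance);
  if (dock === "top") local.y = frustumHeight / 2;
  if (dock === "bottom") local.y = -frustumHeight / 2;
  if (dock === "left") local.x = -frustumWidth / 2;
  if (dock === "right") local.x = frustumWidth / 2;
  control.position = local;
}
```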
I think there is a bright future for this approach, especially as we start to see devices hit the market that demand 3D-aware interfaces. I also think Babylon is positioned to be an important building block for this new era of interfaces.