I’m working on a project where my team has built a React app that wraps and interacts with a Babylon instance, which is built and maintained by a separate team. We also maintain a legacy version of this React application, and most of our model and camera setup was done with that legacy platform in mind.
The new UI we’re building consists of overlay buttons and what we call a tray. The tray is absolutely positioned at the bottom of the screen and has several fixed heights, each corresponding to a different tray state.
In a perfect world, when we change the state (and therefore the height) of the tray, we would resize the canvas containing the Babylon instance to fill the remaining portion of the window. Unfortunately, an iOS issue prevents us from resizing the canvas smoothly without crashing the browser.
We’ve implemented a workaround that consists of:
1. Having the canvas always fill the full window height and width.
2. From within our React app, we pre-process camera objects fetched from our back-end services, then pass them to Babylon to display different angles of the 3D model. The pre-processing applies multipliers to some camera attributes and adjusts the camera positions.
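To make the second step concrete, here is a minimal sketch of the kind of pre-processing we do. All names and values are illustrative (these are not Babylon types, and the multipliers are made up), but they show the shape of the hard-coded adjustments we’d like to replace:

```typescript
// Hypothetical shape of the camera objects fetched from our back end.
interface CameraPreset {
  alpha: number;   // horizontal rotation (radians)
  beta: number;    // vertical rotation (radians)
  radius: number;  // distance from the target point
  target: { x: number; y: number; z: number };
}

// Apply a per-product radius multiplier and a vertical target offset
// before handing the preset to Babylon. These constants are the kind
// of hard-coded values the current workaround relies on.
function preprocessPreset(
  preset: CameraPreset,
  radiusMultiplier: number,
  targetYOffset: number
): CameraPreset {
  return {
    ...preset,
    radius: preset.radius * radiusMultiplier,
    target: { ...preset.target, y: preset.target.y + targetYOffset },
  };
}

const adjusted = preprocessPreset(
  { alpha: Math.PI / 2, beta: 1.2, radius: 10, target: { x: 0, y: 1, z: 0 } },
  1.5,  // push the camera back so the model clears the tray
  -0.5  // shift the framing down toward the visible region
);
```

The problem is that values like `1.5` and `-0.5` only work for the models and viewports they were tuned against.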
While this workaround was suitable while we were implementing our MVP, we’re now reaching a point where we’d like to scale up and release more products on the new platform. That means we need a more scalable, less hard-coded way to position these models consistently across a variety of device viewports.
We have a couple of proposed solutions that we think may work, but first wanted to check whether Babylon already provides something that could help with this.
Currently, we’re thinking we could define a fixed subspace of the canvas. The pre-defined camera angles would then be constrained so that the model always sits within this subspace. We believe this would standardize the camera positions and let us normalize the camera adjustments needed to position the model within the scene, hopefully leading to a generic solution.
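For this first option, the subspace could be described as a normalized rectangle derived from the tray height. The helper below is our own sketch (the function name is hypothetical), but the output rectangle is in the normalized form that Babylon’s `camera.viewport` accepts, with `y` measured from the bottom of the canvas:

```typescript
// Compute a normalized viewport rectangle (x, y, width, height all in
// [0, 1], y measured from the bottom of the canvas) that excludes the
// tray region at the bottom of the screen.
function viewportAboveTray(trayHeightPx: number, canvasHeightPx: number) {
  const trayFraction = trayHeightPx / canvasHeightPx;
  return { x: 0, y: trayFraction, width: 1, height: 1 - trayFraction };
}

// e.g. a 300px tray on an 800px-tall canvas leaves the top 62.5%.
const vp = viewportAboveTray(300, 800);
// camera.viewport = new BABYLON.Viewport(vp.x, vp.y, vp.width, vp.height);
```

Since the canvas itself would stay full-size, this might sidestep the iOS resize crash: only the camera’s render rectangle changes when the tray state changes.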
Another option is to drop the hard-coded multipliers and instead pull information about the scene from the Babylon instance, then calculate the ideal camera position from that information. This is essentially a generic camera pre-processing step.
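As a sketch of what that calculation might look like: given the model’s bounding-sphere radius (which Babylon can report through a mesh’s bounding info) and the camera’s vertical field of view, basic geometry gives the distance at which the sphere exactly fits the view. The function name is ours, not a Babylon API:

```typescript
// Distance at which a sphere of the given radius exactly fills the
// vertical field of view: the sphere subtends the full FOV when
// distance = radius / sin(fov / 2).
function fitDistance(boundingRadius: number, verticalFovRad: number): number {
  return boundingRadius / Math.sin(verticalFovRad / 2);
}

const d = fitDistance(2, Math.PI / 2); // radius 2, 90° vertical FOV
```

A real version would also need to account for the horizontal FOV/aspect ratio and the visible subspace above the tray, but the idea is the same: derive the camera placement from measured scene data instead of tuned constants.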
If Babylon has a feature that could solve this without us developing our own algorithm to normalize the models’ position within the scene, please let us know!