Is there an easy way to ensure a model's size and position is displayed consistently across several device form-factors?

Hi there!

I’m working on a project where my team has built a React app that wraps and interacts with a Babylon instance, which is built and maintained by a separate team. We also maintain a legacy version of this React application, and most of our model and camera setup was done with that legacy platform in mind.

The new UI that we’re building consists of some overlay buttons, and what we call a tray.
The tray is absolutely positioned at the bottom of the screen and has several fixed heights that correspond to different states of the tray.

In a perfect world, when we change the state (and the height) of our tray, we would be able to resize the canvas that contains the Babylon instance to fill the remaining portion of the window. Unfortunately, there is an iOS issue that prevents us from being able to resize the canvas in a smooth way without causing the browser to crash.

We’ve implemented a workaround that consists of:
1. Having the canvas always fill the full window height and width.
2. From within our React app, we pre-process some camera objects that are fetched from our back-end services. We then pass these camera objects to Babylon and use them to display different angles of the 3D model. This process involves applying multipliers to some attributes of the camera and changing the positions.
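To illustrate, here is a hypothetical sketch of what that pre-processing step looks like in spirit; the attribute names and multipliers are invented for illustration, not our actual code:

```javascript
// Hypothetical camera pre-processing step. The attribute names and
// multipliers are illustrative; the real camera objects come from our
// back-end services.
function preprocessCamera(rawCamera, { radiusMultiplier = 1.0, targetYOffset = 0 } = {}) {
  return {
    ...rawCamera,
    // Push the camera further back (or closer) for this form factor.
    radius: rawCamera.radius * radiusMultiplier,
    // Shift the look-at point to account for the tray's dead space.
    target: { ...rawCamera.target, y: rawCamera.target.y + targetYOffset },
  };
}
```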

While this workaround was a suitable solution while we were implementing our MVP, we are now reaching a point where we’d like to scale up and release more products on our new platform. This means we need a more scalable, less hard-coded solution to positioning these models consistently within a variety of device viewports.

We have a couple of proposed solutions we think may be able to solve this issue, but wanted to check and see if there was something implemented as a part of Babylon that would be able to help us with this issue.

Currently, we’re thinking we could define a fixed subspace of the canvas. The pre-defined camera angles would then have to be defined such that our model is positioned within this subspace. We believe this would help to standardize the camera positions and allow us to potentially normalize the changes we need to make to the camera to position the model within the scene. We hope this would allow us to come up with a generic solution.

Another option is to, instead of using hard-coded multipliers, pull some information about the scene from the Babylon instance and calculate the ideal position of the camera given that information. This is essentially just a generic camera pre-processing step.

If there exists a feature in Babylon that could help us with this problem without the need for us to develop an algorithm that normalizes the models’ position within the scene, please let us know!

Pythagoras to the rescue! Framing scripts are easy: figure out the biggest length among the bounding box’s axes, calculate your frustum size, and step the camera back from the center of the mesh by the amount that calculation gives you. I’ve got a ton of examples of this; let me see if I can find one for you.
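A minimal, self-contained sketch of that framing math, assuming a perspective camera with a known vertical FOV; the plain {x, y, z} extents stand in for Babylon’s bounding box values:

```javascript
// Given a world-space bounding box, find how far the camera must sit from
// the box's center so the whole box fits inside a perspective frustum.
function framingDistance(boundingMin, boundingMax, fovRadians) {
  // Largest edge of the bounding box across the three axes.
  const maxExtent = Math.max(
    boundingMax.x - boundingMin.x,
    boundingMax.y - boundingMin.y,
    boundingMax.z - boundingMin.z
  );
  // Half the extent must fit under the half-FOV angle:
  // tan(fov / 2) = (maxExtent / 2) / distance  =>  solve for distance.
  return (maxExtent / 2) / Math.tan(fovRadians / 2);
}
```

In Babylon terms, the extents would presumably come from mesh.getBoundingInfo().boundingBox.minimumWorld / maximumWorld, and the result would feed the camera’s radius or z position, plus a small margin.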


Hi Pryme8!

Thanks for the quick response!

That was one of our proposed solutions, here’s some scratch math to visualize our thoughts.

Would you say this sort of thing would be the best solution?


yup that’s the exact solution!


Hi again!

We’ve determined that updating the camera’s z position is a suitable solution for standardizing the size of the shoe within the viewport, but unfortunately it didn’t resolve the issue of our usable space shrinking and growing with the tray height.

To resolve that @Alex_B suggested that we manipulate the camera’s viewport y and height values to accommodate the changes in usable space that result from changes in our Tray’s height.
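The mapping itself is simple; a sketch, assuming Babylon’s convention that viewport values are normalized to 0..1 with y measured from the bottom of the canvas:

```javascript
// Convert the tray's pixel height into normalized camera viewport values.
// A bottom-anchored tray eats the lower part of the canvas, so the render
// region's y origin moves up and its height shrinks by the same fraction.
function viewportForTray(trayHeightPx, windowHeightPx) {
  const trayFraction = trayHeightPx / windowHeightPx;
  return { x: 0, y: trayFraction, width: 1, height: 1 - trayFraction };
}
```

The result would then be applied as something like camera.viewport = new BABYLON.Viewport(v.x, v.y, v.width, v.height).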

This solution works perfectly as expected, and moves the viewable window up and down as necessary. But there is an issue when we tie this implementation in with Refraction Textures.

When we load the scene and make no modifications to the camera’s viewport, we’re able to see our Refraction Textures as expected.

When we manipulate the viewport through the scene object, we start to have issues with our meshes that use the Refraction Textures.

Both of these situations are pictured below.

This happens with all of our products that make use of the Refraction Textures.

Alex may be able to provide some more context about what he thinks the issue is, as well.


I think @gtimm has provided all the relevant info here. My suspicion is that the refraction texture’s render pass is not respecting the camera viewport multipliers? Maybe a bug on this edge-case setup?


Are you generating the refraction texture by code (a probe, for example)? The viewport is applied automatically by the GL layer at the very end; I don’t see how it could make any difference here…

Maybe if you can provide a repro either in the Playground or in a web site we could have a deeper look.

Here’s a playground that demonstrates the issue. Notice that as the viewport grows and shrinks, you can see the refraction moving as well. I’d expect it not to change, since the camera and objects are not moving.

I don’t think it’s the right PG; the viewport isn’t changing?


Oops, I forgot to hit save!


By default the viewport of the camera is used to render in render target textures. To disable this, you have to set ignoreCameraViewport to true:
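A minimal self-contained sketch of that fix (a plain object stands in here for the real BABYLON.RefractionTexture instance):

```javascript
// Stand-in for the RefractionTexture created for the scene's meshes.
const refractionTexture = { ignoreCameraViewport: false };

// RenderTargetTextures (RefractionTexture included) render with the active
// camera's viewport by default; setting this flag opts the texture's own
// render pass out of that viewport.
refractionTexture.ignoreCameraViewport = true;
```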


I have had to do this for like 20 projects at this point… I have some fancier methods for it, but this is the simple nitty-gritty. You might have to hit play if the TS scene hangs on first load.

That will frame whatever mesh, though, as long as it’s not some stupidly absurd bounding box ratio.


Hi. I’m just throwing in one solution that I usually use. I’ve had several projects where I had to deal with a huge number of different objects fetched from storage, without knowing what the model is, what format it’s in, etc.

This solution doesn’t require updating the position of the camera, as I’ve found that isn’t consistent across all possible models.

Basically, what I do is create one box using Babylon and set the size of that box to the biggest dimension of the mesh inside the scene. So, let’s say that your model in the scene is 2x3x4. The size of the box will be 4.

I position this box on top of the model, set it as the parent of each mesh in the scene, position the box at (0, 0, 0), and call this:


Which sets the size of the cube to 1x1x1 (so basically, the whole scene becomes a 1x1x1 world where the parent box acts as the world).

When you do this, you have a parent box at (0, 0, 0), and you can set the camera target at (0, 0, 0) (or alternatively at the parent box’s bounding box center). Since the world is always 1x1x1 and the target looks at the center of this parent box (which is isVisible = false, by the way :D), no matter what size of model comes in, it will always look the same, and there’s no need to do anything with the camera.
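The call elided above is presumably Babylon’s built-in normalizeToUnitCube on the parent box (an assumption based on the description); the underlying math is just a uniform scale factor:

```javascript
// Uniform scale that turns the scene's largest dimension into 1.
// In Babylon this whole step is likely parentBox.normalizeToUnitCube(true),
// which scales the node (and its descendants) to fit a unit cube.
function unitCubeScale(width, height, depth) {
  const maxDim = Math.max(width, height, depth);
  return 1 / maxDim;
}
```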

A few notes, though. I’d advise following the rule that the size of the box should be the biggest of the W x H x D values of the loaded mesh. So basically you need to calculate the bounding box of the mesh to get these W, H, and D values, and then use the biggest as the cube size. This is really simple if you have ONE mesh, meaning the whole model consists of only one mesh/part.

The problem comes when the model is assembled from multiple meshes/parts. In that case, you cannot simply take one of them and work with its size; you need to calculate the combined bounding box of all the parts.
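A self-contained sketch of that combination step; plain {min, max} objects stand in for Babylon’s per-mesh bounding boxes:

```javascript
// Combine per-mesh axis-aligned bounding boxes into one box that
// contains every part of a multi-mesh model.
function combinedBoundingBox(boxes) {
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const box of boxes) {
    for (const axis of ["x", "y", "z"]) {
      min[axis] = Math.min(min[axis], box.min[axis]);
      max[axis] = Math.max(max[axis], box.max[axis]);
    }
  }
  return { min, max };
}
```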

Then you can select any mesh from the mesh array, say mesh[0], and set this newly calculated bounding box on it.

To do that, I use the method called totalBoundingInfo() in this playground (lines 30-47) (note that this playground only demonstrates the method; it’s not a working example of the implementation I’m describing).

After you calculate totalBoundingInfo, you can set that bounding info on any mesh from the meshes array (like scene.meshes). For example:

mesh[0].setBoundingInfo(totalBoundingInfo(meshes)) (where meshes is an array of meshes, like scene.meshes).

At this point you can easily read the bounding box values from the model, get the biggest dimension, set it as the parent box size, parent everything, normalize the cube, and probably call


to make sure that future calculations work with the new values.

One more thing, though. I’m not sure about the nature of your project, but all of this affects the scale of the mesh or meshes in the scene. So if you are using the scaling values for something, you need to make sure you offset everything properly. For example:

For a 2x3x4 model in the scene, if you scale it into a 1x1x1 cube where 4 determines the cube size, you get 0.5 x 0.75 x 1 (everything is scaled down 4x, which is why it’s important to match the cube size to the biggest dimension). So if you need the scaling values, or the initial “real world” dimensions of the model, you need to multiply by that scaling offset factor to recover the initial values.
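That arithmetic, spelled out:

```javascript
// Worked example of the scaling offset for a 2 x 3 x 4 model.
const dims = [2, 3, 4];
const scale = 1 / Math.max(...dims);         // 0.25: the biggest dimension maps to 1
const scaledDims = dims.map(d => d * scale); // [0.5, 0.75, 1]

// To recover the initial "real world" values later, apply the inverse factor:
const originalDims = scaledDims.map(d => d / scale); // [2, 3, 4]
```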

I wrote a lot of stuff here, and maybe it sounds complicated, but if you decide to give it a go, it’s quite simple to implement. I can create a simple playground where you can see this in practice if you want.



Here is a working example of what I’m talking about. I hope what’s going on makes sense… (lines 20-86, plus the totalBoundingBox method at the end)

You can try changing the diameter of the spheres (lines 21-23) and/or the position of the spheres (lines 25-26), and everything should follow. You should be able to keep the camera positioned at the center of the model without needing to change its radius, as the scene is always 1x1x1 no matter which model you load.


Our sizing issues should be resolved with everybody’s proposed solutions, and our viewport issues were resolved by Evgeni_Popov!

Thanks, everybody for the great suggestions, and thanks to @Evgeni_Popov for clearing our blocker!

FYI, you don’t need to redo these calculations on each iteration of the loop.

You can move this code:

let md0 = Math.pow((Math.pow((maxX - minX), 2) + Math.pow((maxY - minY), 2)), 0.5)
let md1 = Math.pow((Math.pow((maxX - minX), 2) + Math.pow((maxZ - minZ), 2)), 0.5)
let md2 = Math.pow((Math.pow((maxY - minY), 2) + Math.pow((maxZ - minZ), 2)), 0.5)
maxDistance = Math.max(maxDistance, Math.max(md0, Math.max(md1, md2)))

outside of the for loop. The min/max extents are only final once the loop has finished, so the diagonals only need to be computed once.
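Concretely, the hoisted version might look like this (sample vertex data stands in for the playground’s mesh data):

```javascript
// Sample vertex data; in the playground these come from the mesh.
const vertices = [
  { x: 0, y: 0, z: 0 },
  { x: 3, y: 4, z: 0 },
  { x: 1, y: 2, z: 2 },
];

// Inside the loop, only accumulate the extents...
let minX = Infinity, minY = Infinity, minZ = Infinity;
let maxX = -Infinity, maxY = -Infinity, maxZ = -Infinity;
for (const v of vertices) {
  minX = Math.min(minX, v.x); maxX = Math.max(maxX, v.x);
  minY = Math.min(minY, v.y); maxY = Math.max(maxY, v.y);
  minZ = Math.min(minZ, v.z); maxZ = Math.max(maxZ, v.z);
}

// ...then compute the face diagonals once, after the loop.
const md0 = Math.hypot(maxX - minX, maxY - minY);
const md1 = Math.hypot(maxX - minX, maxZ - minZ);
const md2 = Math.hypot(maxY - minY, maxZ - minZ);
const maxDistance = Math.max(md0, md1, md2);
```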