How big can a Babylon.js scene be in meters, and can it use spherical polar coordinates?

I want to make a Babylon.js scene with virtual objects positioned in the scene by their Earth geolocation coordinates.

Firstly, I wonder whether that would be a performance problem, or whether Babylon.js optimizes this somehow. There won't be many complicated objects in the scene, mostly just 2D planes.

Secondly, can a Babylon.js scene be set up with spherical polar coordinates too, since Earth geolocation coordinates are essentially that, i.e. (x, y, r) being the altitudinal angle, azimuthal angle, and radial distance respectively? r is usually omitted, since only points on the Earth's surface are usually considered.

One reason to use the same coordinate system, and a big scene, is that I would not have to transform spherical geolocation coordinates into planar Cartesian scene coordinates.

Yes, it can handle it; the question is, can you code it?

If you know how to work in that coordinate space then you should be good to go. Just learn how to display models at the coordinates you specify.

Really? A scene as big as the Earth?
Also, does it support spherical coordinates with a point descriptor/constructor, as it does Cartesian coordinates with Vector3, AFAIK?

Size is relative to your position. This PG has a camera as close as you can get to a sphere with an earth map: Babylon.js Playground. Same from outer space.

If you are talking about an earth that you can stand on and walk around as if you were there then that is a different ball game.

We need to know more about your use case.

Not directly; you would need a function to convert spherical coordinates to Cartesian coordinates, but that's not difficult.
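As a sketch, such a conversion could look like the following in plain JavaScript. The function name, axis conventions, and Earth-radius constant are my own assumptions; Babylon.js uses a left-handed, Y-up system, so the result can be fed straight into a `Vector3`:

```javascript
// Convert geographic coordinates (degrees) to Cartesian scene coordinates.
// Assumes a Y-up system and Earth's mean radius in meters, so that
// 1 scene unit = 1 meter. Names and conventions are illustrative only.
const EARTH_RADIUS = 6371000; // meters

function geoToCartesian(latDeg, lonDeg, radius = EARTH_RADIUS) {
  const lat = (latDeg * Math.PI) / 180; // latitude measured from the equator
  const lon = (lonDeg * Math.PI) / 180; // azimuthal angle (longitude)
  return {
    x: radius * Math.cos(lat) * Math.cos(lon),
    y: radius * Math.sin(lat), // up axis
    z: radius * Math.cos(lat) * Math.sin(lon),
  };
}
```

Usage would then be along the lines of `const p = geoToCartesian(41.0, 29.0); mesh.position = new BABYLON.Vector3(p.x, p.y, p.z);`.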


@JohnK I do not understand what "Size is relative to your position" means. The first PG has a sphere of diameter 2 with a texture on it, plus a camera and a light. Am I missing some point there?

I think what I mean is more like what you said: "If you are talking about an earth that you can stand on and walk around as if you were there then that is a different ball game."

If spherical coordinate support does not exist, then maybe it is not very advantageous for me to use a big spherical scene. Maybe I need to use a normal Babylon.js scene with its own supported coordinate system (Cartesian, according to what I have understood from your reply) and convert my objects' coordinates into the scene's coordinate system. An important requirement is that I need to preserve real-world distances.

My use case is an app that shows virtual objects (with coordinates coming from a server) at the correct coordinates on Earth, and does the same for the device running the application, so you can use a Babylon.js camera at your actual location to look at those things around you.

It seems that you want to write an AR (augmented reality) app, in which case you do not need to store the whole earth, just a database of objects related to location. Is the following the basis of what you want to do?

  1. Device reads your geo location,
  2. From a database you obtain the list of virtual objects from that location
  3. Display the objects with Babylon.js

Both spheres have a diameter of 2; their screen size is relative to the camera position.

Honestly, I do not need any models like that at all. I will only show some generally 2D models to represent real-world locations and metadata, for malls, let's say.

What I need is a seamless alignment of the real world and the virtual scene onto each other. So coordinates are important, and they are spherical polar. Later I can even optimize to spawn only nearby models. But what about my original concern of using spherical coordinates to avoid transformation to Cartesian, and the other questions about scene size in that case?

Yes @JohnK. That is what I want.

By the way, if there is no direct spherical coordinate support, then I can easily convert geolocation coordinates to Cartesian coordinates by first converting the angles into radians and then employing basic trigonometry to convert the polar coordinates (r, theta, phi) to Cartesian (x, y, z). Then I need to place my own location too. I hope it will work. That way somebody needs to occlude the virtual objects on the far side of the globe, or simply not spawn them at all; no problem, that is necessary and can easily be handled.

An alternative could be laying everything out on a flat surface, taking the arc length into account while calculating the coordinates of points on this flat surface.
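That flat-surface approach can be sketched as a local tangent-plane projection, where the north and east offsets from a chosen origin (e.g. the user's location) are arc lengths. The helper name and the use of a mean Earth radius are my assumptions; this is a good approximation at city scale:

```javascript
// Project geo coordinates onto a flat local plane centered on an origin,
// using arc length: meters north and east of the origin.
// Illustrative sketch only; accurate for small separations from the origin.
const EARTH_RADIUS = 6371000; // meters

function geoToLocalPlane(latDeg, lonDeg, originLatDeg, originLonDeg) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(latDeg - originLatDeg);
  const dLon = toRad(lonDeg - originLonDeg);
  return {
    north: EARTH_RADIUS * dLat, // arc length along the meridian
    east: EARTH_RADIUS * Math.cos(toRad(originLatDeg)) * dLon, // along the parallel
  };
}
```

The cosine factor accounts for parallels of latitude shrinking toward the poles; the resulting north/east values can be mapped directly onto two scene axes with 1 unit = 1 meter.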

You do not need spherical coordinates. It is neither possible nor necessary to render the whole earth as a model in a scene when all you want the scene to render is the models/data of your current location on the earth.

You have the geo coords; as I said before, all you need is a mapping between geo coords and the models/data at those geo coords. At any location you can treat the earth as flat, with a direct relationship between geo coords and Cartesian coords.

So at geo coords (x, y) you display models which map within (x ± maxx, y ± maxy) for some maxx, maxy.

As you move within the scene to (x + dx, y + dy), you then display data which maps within (x ± maxdx, y ± maxdy) for some maxdx, maxdy.
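A minimal sketch of that lookup, assuming each object carries its geo coordinates (the function and field names are hypothetical; a real app would likely run this query server-side or via a spatial index):

```javascript
// Given a list of objects tagged with geo coordinates, return those within
// a bounding box around the current location (maxLat/maxLon in degrees).
function objectsNear(objects, lat, lon, maxLat, maxLon) {
  return objects.filter(
    (o) => Math.abs(o.lat - lat) <= maxLat && Math.abs(o.lon - lon) <= maxLon
  );
}
```

Only the objects returned by such a filter would be spawned into the scene; as the device moves, the filter is re-run and meshes are added or disposed accordingly.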

Rendering whole earth as a model is not my interest at all.

I get your other point: since the Earth is big enough, there is no real need for conversion, since the arc length between two points and the straight line between the same two points are approximately equal.
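That approximation is easy to check numerically: on a sphere of radius R, an angular separation θ gives an arc length R·θ versus a straight-line chord of 2R·sin(θ/2), and at city scale the two agree to well under a millimeter. A quick sketch (constant and function name are my own):

```javascript
// Compare great-circle arc length with the straight-line chord for an
// angular separation theta (radians) on a sphere of radius R.
const R = 6371000; // mean Earth radius, meters

function arcVsChord(theta) {
  const arc = R * theta;                  // distance along the surface
  const chord = 2 * R * Math.sin(theta / 2); // straight line through the sphere
  return { arc, chord, error: arc - chord };
}
```

For a 1 km separation (theta ≈ 1000 / R), the error term is on the order of a micrometer, so treating the local area as flat loses essentially nothing.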


Basically, you say what your coordinate space is. Everything in the scene is relative; if you know how to plot points in any of those systems, you just treat BJS as an XYZ left-handed system and convert the space as needed. I'm not too sure where the confusion is.

If you say 1 unit is 1 meter, then it's 1 meter; if you say it's 1 mile, it's 1 mile. If you got your XYZ value from a polar conversion, then so be it; if you did a planar projection, then so be it. You are in control.

If it's also a question of performance, then yes, BJS can handle it. Whether the meshes are simple or complex, it will be fine as long as you stay within standards-compliant specs and don't do something crazy like a bajillion vertices. There are frustum culling methods, LOD, instancing, solid particle systems, etc. that can be used for performance.

I see. My confusion came from the fact that I need to move the user, by moving the user's camera, in the scene when the user really moves. So I was thinking I could not make relative measurements to keep distances correct. I was just worried about overloading the system.

I think I get your point.

When you start moving your user, do you intend for them to be on a path, or will they have control?

You could also leverage an ArcRotateCamera, maybe, if you are going to be focusing on a central point?

The user will be in control, since they are moving in the real world, moving their device and seeing virtual objects geolocated at real geolocations in the scene. E.g. they will arrive in front of a virtual object in the scene only if they arrive at it in the real world.
They can also look in different directions by changing the device orientation.

So there are already APIs for that. I'd do some googling, because you will basically generate that data one way and then use BJS to display information in accordance with the data you feed it.

I'm pretty sure Google's APIs are going to be your hot ticket.
