How to render suspended object as per instructions from server

I want to create a 3D scene where objects float in empty space. The server should know the exact location coordinates for each object, and the Babylon.js UI should follow those instructions to render each object, using the .glb file provided by the server along with the location in space where the object has to be rendered.
The 5 major issues I am facing are:

  1. How to allocate location coordinates on server so that newly created objects are close to the existing objects.
  2. How to allocate location coordinates on server so that the objects won’t collide with each other.
  3. I don’t think an HTTP API call would be a very scalable option for rendering new objects around the camera, which can be moved in all directions.
  4. A strategy to render newer objects efficiently, because there could be hundreds of objects. For example, rendering objects in a region new to the camera and unloading previously rendered out-of-view objects. Any examples of this would be appreciated.
  5. The dimensions of the objects may not be the same in all cases.

The objects won’t be movable by the user.
Any help or nudge in the right direction would be appreciated. I’m new to Babylon.js, so any relevant doc references along with examples would also help.


You could build a controller on the client that makes objects spawn and move on some commands that it receives, like

Controller.runCommand('{"name": "spawn", "id": 4, "position": [0, 0, 0]}');
Controller.runCommand('{"name": "move", "id": 4, "position": [1, 0, 0]}');

and send the commands to it from the server over a WebSocket.
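A minimal sketch of such a controller, runnable in Node. The command names `spawn`/`move` come from the snippet above; the plain `Map` registry is an assumption standing in for the Babylon scene, and the comments mark where real Babylon calls (e.g. `BABYLON.SceneLoader.ImportMeshAsync`) would go:

```javascript
// Minimal client-side command controller (sketch).
// In a real Babylon scene the spawn handler would load the .glb with
// BABYLON.SceneLoader.ImportMeshAsync and position the resulting mesh;
// here a plain registry stands in for the scene so the flow is visible.
const Controller = {
  objects: new Map(), // id -> { position: [x, y, z] }

  runCommand(json) {
    const cmd = JSON.parse(json);
    switch (cmd.name) {
      case "spawn":
        // e.g. BABYLON.SceneLoader.ImportMeshAsync("", baseUrl, file, scene)
        this.objects.set(cmd.id, { position: cmd.position });
        break;
      case "move":
        // e.g. mesh.position.fromArray(cmd.position)
        this.objects.get(cmd.id).position = cmd.position;
        break;
      default:
        throw new Error(`unknown command: ${cmd.name}`);
    }
  },
};

// Commands arrive from the server over a WebSocket (hypothetical URL):
// const ws = new WebSocket("wss://example.com/scene");
// ws.onmessage = (ev) => Controller.runCommand(ev.data);
```

Keeping the wire format as plain JSON commands like this means the server never needs to know anything about Babylon itself.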

How to spawn an object close to, but not colliding with, another object?

  • Make a unique command for it like “spawn_near”, do the calculations on the client using mesh position and bounding info, and send that data back to the server
  • Or keep track of every mesh position and their bounding info on the server, and just calculate it on the server
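The second option can be as simple as a registry on the server. A sketch, where the record shape (position plus bounding-sphere radius) is an assumption; the radius is all the r + R math below needs:

```javascript
// Server-side registry of placed objects (sketch).
// Each record stores a position and a bounding-sphere radius.
class SceneRegistry {
  constructor() {
    this.objects = new Map(); // id -> { position: [x, y, z], radius }
  }
  add(id, position, radius) {
    this.objects.set(id, { position, radius });
  }
  // Smallest center-to-center distance at which a new object of
  // `radius` fits next to the object `id` without overlap (r + R).
  minDistanceTo(id, radius) {
    return this.objects.get(id).radius + radius;
  }
}
```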

For the very basic case you can offset by r + R (the two bounding-sphere radii) in any direction from the position of a mesh,

but if there are other possible collisions, randomly generating different angles and greater distances until you find a free spot works too.
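Put together, the r + R rule plus a randomized retry loop might look like this. A sketch: the function name, the retry budget, and the ring-widening schedule are all assumptions, and the 1% padding just avoids floating-point grazing contact:

```javascript
// Pick a spawn position near `anchor` that keeps at least r + R clearance
// from every existing bounding sphere. Starts just outside the minimum
// distance and widens the search ring after each failed round of tries.
function spawnNear(anchor, newRadius, existing, tries = 64) {
  const minDist = anchor.radius + newRadius;
  for (let i = 0; i < tries; i++) {
    const dist = minDist * (1.01 + 0.25 * Math.floor(i / 8));
    const theta = Math.random() * 2 * Math.PI;    // azimuth
    const phi = Math.acos(2 * Math.random() - 1); // polar, uniform on sphere
    const candidate = [
      anchor.position[0] + dist * Math.sin(phi) * Math.cos(theta),
      anchor.position[1] + dist * Math.sin(phi) * Math.sin(theta),
      anchor.position[2] + dist * Math.cos(phi),
    ];
    const collides = existing.some((o) => {
      const dx = candidate[0] - o.position[0];
      const dy = candidate[1] - o.position[1];
      const dz = candidate[2] - o.position[2];
      return Math.hypot(dx, dy, dz) < o.radius + newRadius;
    });
    if (!collides) return candidate;
  }
  return null; // no free spot found within the retry budget
}
```

With hundreds of objects the linear scan over `existing` is still cheap; a spatial index only becomes worth it at much larger counts.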

A strategy to render newer objects efficiently, because there could be hundreds of objects. For example, rendering objects in a region new to the camera and unloading previously rendered out-of-view objects. Any examples of this would be appreciated.

Are you talking about occlusion culling?

Hi Heaust-ops,

keep track of every mesh position and their bounding info on the server, and just calculate it on the server.

This matches my use case. Are there any npm libraries that could help me keep track of this information and generate values for new objects to be placed?
I am assuming that to generate non-colliding coordinates for a new object, I would need to include the position and bounding info of all objects placed so far in the 3D space in my calculations.

I think occlusion culling partly covers my issue. To explain my exact concern in detail: I want the engine to only draw objects that are in the viewport of the camera.


are there any npm libraries that could help me to keep track of this information and generate values for new objects to be placed?

If you’re using JS on the backend, one option is running Babylon.js on the server with the scene kept in sync with your client; that way you can get the bounding info values and whatever else you need straight from the Babylon scene.

To explain my exact concern in detail, I want the engine to only draw objects that are in the viewport of the camera.

So you want frustum culling + occlusion culling? Babylon.js does frustum culling out of the box, i.e. it doesn’t render anything outside the camera frustum.

Occlusion culling, for objects inside the camera frustum that might be occluded by other objects, you’ll have to set up yourself using occlusion queries.

You may have a look at how it is done in GitHub - jalmasi/vrspace: VRSpace: Multiuser Virtual Reality Engine
