How to use coordinates received from computer vision libraries like TensorFlow.js to place a 3D model at the specified coordinates?

I am using TensorFlow.js for object identification. For example, I receive the xy coordinates of a chair I have identified, and I then want to place a 3D model (a glTF model, actually) at those coordinates. And if I move the chair, I want to resupply the coordinates so that the 3D model stays attached to the chair. How can I do it?

Hi @Efshal and welcome to the community.

Not sure whether your problem is

converting from xy screen coordinates to 3D scene coordinates, in which case this might help,

or keeping the model attached as the chair moves, in which case you reposition the model from the newly received coordinates each time they change.
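A minimal sketch of that second case, assuming a hypothetical getChairScenePosition() that already returns the chair's position in scene coordinates (getting there from screen xy is the first case), that your Babylon scene is called scene, and that the loaded glTF's root mesh has the id "root":

function followChair() {
  // Hypothetical helper: returns a BABYLON.Vector3 in scene coordinates, or null if no chair was detected
  const chairPosition = getChairScenePosition();
  if (chairPosition) {
    scene.getMeshByID("root").position.copyFrom(chairPosition);
  }
  // Re-run on the next frame so the model keeps following the chair
  requestAnimationFrame(followChair);
}
followChair();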

Hello @Efshal, just checking in: was your question answered?

Not really. Using the code below, I want to render a 3D model at a specified position on the screen, but I couldn't translate the screen coordinates into these Vector3 coordinates:
sceneToRender.getMeshByID("root").position = new BABYLON.Vector3(
  positionx,
  positiony,
  15
);

You can use Vector3.Unproject for that: Unproject Vector3 | Babylon.js Documentation (babylonjs.com) :smiley:
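For reference, a rough sketch of how Unproject could be wired into the snippet above. positionx, positiony and the depth of 15 come from that snippet; the helper name screenToWorld and the near/far two-point unprojection are just one way to do it, not the only one:

// Converts a screen position in pixels to a point `distance` units into the scene.
// screenX/screenY are pixel coordinates on the render canvas (coordinates coming from
// the video TensorFlow.js analysed may need to be rescaled to the canvas size first).
function screenToWorld(scene, screenX, screenY, distance) {
  const engine = scene.getEngine();
  const camera = scene.activeCamera;
  const width = engine.getRenderWidth();
  const height = engine.getRenderHeight();
  // Unproject the screen point at the near plane (z = 0) and at the far plane (z = 1)
  const near = BABYLON.Vector3.Unproject(
    new BABYLON.Vector3(screenX, screenY, 0),
    width,
    height,
    BABYLON.Matrix.Identity(),
    camera.getViewMatrix(),
    camera.getProjectionMatrix()
  );
  const far = BABYLON.Vector3.Unproject(
    new BABYLON.Vector3(screenX, screenY, 1),
    width,
    height,
    BABYLON.Matrix.Identity(),
    camera.getViewMatrix(),
    camera.getProjectionMatrix()
  );
  // Walk `distance` units from the near point along the ray toward the far point
  const direction = far.subtract(near).normalize();
  return near.add(direction.scale(distance));
}

// Usage with the snippet above
const worldPosition = screenToWorld(sceneToRender, positionx, positiony, 15);
sceneToRender.getMeshByID("root").position.copyFrom(worldPosition);

The fixed distance of 15 keeps the behaviour of your original snippet; in practice you would choose the depth based on where the chair actually sits in the scene.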

I also recommend reading up a bit on coordinate systems to get a better understanding of how they work: LearnOpenGL - Coordinate Systems
Tutorial 9 - Coordinate Systems in OpenGL - YouTube