How do I calculate a coordinate near a mesh, offset +1 in y and +1 in x relative to the camera's space?

Here is what I’m trying to do:

I want to create a custom gizmo on a utility layer that shows a GUI texture menu overlaying slightly to the upper right of any mesh I select to highlight. The purpose of this is to show properties of the mesh I pointed at and potentially offer menu actions such as deleting the mesh or parenting it to another mesh.

So far what I have tried is creating a plane and copying the position of the selected mesh into the plane. This works okay, but because I’m also using the gizmo manager for position/translation and bounding box, my menu gets in the way of those controls. Also, if the mesh I pointed at is far away from the camera, the GUI menu is too small to read.

So what I tried next was placing the GUI menu closer to the camera so I could read the text. I subtracted the camera position from the mesh position to get a direction vector, normalized it, scaled it, added it to the camera position, and copied the resulting Vector3 to the gizmo plane.
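
In code, it was roughly this (menuPlane and camera are placeholders for my actual objects):

// direction from the camera toward the selected mesh
const dir = pickedMesh.position.subtract(camera.position).normalize();

// place the menu a fixed, readable distance along that direction
const distance = 2;
menuPlane.position = camera.position.add(dir.scale(distance));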

Here is a video of the result.

Now the menu is a readable size whenever I click on a mesh; however, it blocks the transform and bounding box gizmos, because the menu texture still sits “in the center” if you project a ray from the camera to the mesh position.

What I would like is to offset the menu to the “upper right” of a mesh, similar to how 2D GUIs sometimes have an ‘X’ delete icon in the upper right. But I don’t know how to specify this coordinate, because it’s not simply increasing y and x off of the mesh position; I need to somehow apply an increase in x and y in my camera space to the mesh…
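
I imagine it would be something like this, but I’m not sure it’s right (untested sketch; I’m guessing camera.getDirection gives a local axis in world space):

// hypothetical: express the offset in camera space, then convert to world space
const right = camera.getDirection(BABYLON.Axis.X); // camera's local +x in world space
const up = camera.getDirection(BABYLON.Axis.Y);    // camera's local +y in world space
menuPlane.position = pickedMesh.position
    .add(right.scale(1))  // +1 to the right, as seen by the camera
    .add(up.scale(1));    // +1 up, as seen by the camera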

Hi! I’m not sure, but do you want the GUI to behave like in this example? If you link your GUI element to the mesh, you can use offsets for x and y.
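
For example, something like this (a rough sketch with a fullscreen GUI; the offsets are in pixels):

const advancedTexture = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("UI");
const menu = new BABYLON.GUI.Rectangle("menu");
advancedTexture.addControl(menu);

// attach the control to the mesh, then nudge it toward the upper right
menu.linkWithMesh(mesh);
menu.linkOffsetX = 100;  // pixels to the right
menu.linkOffsetY = -100; // negative y moves the control up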


That is what I want; however, I think linkWithMesh only works with the fullscreen GUI. I need this to work in VR, so I’m creating a texture plane from a plane mesh, which means I need to know how to position the actual plane, not just the x,y position of the control.

Adding @RaananW for the VR part.

I’ve done that in a VR game before. You need to know the position of the mesh and the direction of the camera. Since you also want to position the menu at the top right of the mesh, you need the size of the mesh (its bounding box). I also did a mesh.lookAt(activeCamera.position) to keep the menu facing the camera, though that can sometimes overlap the mesh. Lastly, I had an onBeforeRender observable so the menu keeps facing the camera while moving in VR space. I can make a PG later today if the description above isn’t enough - it would help if you had a PG to start with in that case.
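
Roughly, the pieces I described (an untested sketch; menuPlane stands in for your texture plane):

// position the menu at the top right of the mesh using its bounding box
const extents = pickedMesh.getBoundingInfo().boundingBox.extendSizeWorld;
menuPlane.position = pickedMesh.position.add(new BABYLON.Vector3(extents.x, extents.y, 0));

// keep the menu facing the camera every frame while moving in VR space
// (depending on the plane's sideOrientation you may need to rotate it 180 degrees)
scene.onBeforeRenderObservable.add(() => {
    menuPlane.lookAt(scene.activeCamera.position);
});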


I just made a playground: https://playground.babylonjs.com/#JEWZLD

I realized I could achieve what I intended by creating a container for the plane and then offsetting the plane relative to its container parent. That way I can just update the parent’s position to the selected mesh’s position while the texture plane keeps a constant offset. To keep the texture facing the camera I use billboard mode on the parent.
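
Simplified, the idea looks like this (a sketch, not the exact PG code):

// parent container that billboards toward the camera
const menuContainer = new BABYLON.TransformNode("menuContainer", scene);
menuContainer.billboardMode = BABYLON.TransformNode.BILLBOARDMODE_ALL;

// the plane keeps a constant local offset to the upper right of its parent
menuPlane.parent = menuContainer;
menuPlane.position = new BABYLON.Vector3(1, 1, 0);

// on pick, only the container moves
menuContainer.position.copyFrom(pickedMesh.position);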

To prevent the menu intersecting with the mesh itself I use a utility layer. However, it still looks kind of messy, since the menu covers (or is covered by) the gizmos’ utility layer. Sort of busy looking :thinking:
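
The utility layer part is just creating the plane in the utility layer’s scene (sketch):

// render the menu on a utility layer so it draws over the main scene
const utilLayer = new BABYLON.UtilityLayerRenderer(scene);
const menuPlane = BABYLON.MeshBuilder.CreatePlane("menuPlane", { size: 1 }, utilLayer.utilityLayerScene);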


Looks good. If you don’t want to use a utility layer, you can work more with the position relative to the camera. The bounding box approach I mentioned above, applied to your PG, could go like this:

scene.onPointerObservable.add((pointerInfo) => {
    if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERPICK && pointerInfo.pickInfo.pickedMesh) {
        const { pickedMesh } = pointerInfo.pickInfo;
        gizmoManager.attachToMesh(pickedMesh);

        // extendSize is the half-extent of the bounding box in local space;
        // use extendSizeWorld instead if the mesh is scaled or rotated
        const boundingInfo = pickedMesh.getBoundingInfo();
        const sizeVector = boundingInfo.boundingBox.extendSize;
        console.log('bounding info:', boundingInfo);

        // offset the menu container toward the upper right of the mesh
        menuContainer.position = new BABYLON.Vector3(
            pickedMesh.position.x + (sizeVector.x / 2),
            pickedMesh.position.y + (sizeVector.y / 2),
            pickedMesh.position.z
        );
    }
});

Once we fully integrate the WebXR layers implementation, you will be able to create a layer that does that for you. This is how the fullscreen UI will work as well.

Until then, you need to use a solution very similar to the one you came up with.

I have experimented a bit with “faking” a fullscreen UI (and thus placing a UI element) in XR mode. It’s not an optimal solution, but you can see the general gist here -

Babylon.js Playground (babylonjs.com)

This practically creates a mesh that is always the size of the screen and places the UI on it. I was hoping to get this implemented (using layers) for 5.0, but it will have to wait for a future 5.x release or for 6.0. Until then, hacks like the one you came up with will be needed to make it work correctly.
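
The gist of the hack in a few lines (a minimal sketch of the idea, not the playground’s exact code):

// a plane parented to the camera, sized to exactly fill the view at distance d
const d = 1.5;
const height = 2 * d * Math.tan(camera.fov / 2); // camera.fov is vertical by default
const width = height * engine.getAspectRatio(camera);

const screenPlane = BABYLON.MeshBuilder.CreatePlane("screen", { width, height }, scene);
screenPlane.parent = camera;       // follows the head in XR
screenPlane.position.set(0, 0, d); // always directly in front of the camera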

Hey @RaananW, that sounds interesting. It’s the first time I’ve heard of WebXR layers. Just curious, what kind of content will we be able to put on a WebXR layer? Will it allow placing HTML content, such as content from an iframe?

Nope, it is not meant for HTML. Think of it as a mesh shaped in a specific form that has a render target texture as its material - similar to composition layers on other (OpenXR-enabled) platforms. You can read about it here - layers/explainer.md at main · immersive-web/layers · GitHub

So far we support only the projection layer, but I will soon start working on adding all the other layers and allowing media to be added to them. Video and images will then perform better and look better while in an XR session.


Hi @owen, just checking in to see if you have any further questions :smiley:

nope. not for this. thanks!