Ray Casts break across Universal Cameras' viewports

In my project I have two Universal Cameras. One is the main UniversalCamera, which is swapped out for an ArcRotateCamera when looking at the model, while the UniversalCamera is used when navigating in the model. When testing, though, I just move the UniversalCamera to where the ArcRotateCamera should be and keep the UniversalCamera on, so for all intents and purposes, just consider the main camera as a UniversalCamera. I have another stationary UniversalCamera as a minimap. I keep the minimap’s camera stationary in orientation by not attaching its controls to the canvas. In order to make both cameras show up, especially since one is overlaid on top of another, I used the scene.activeCameras array, and appended the minimap camera to the end of that array.

The issue is that when I go to pick meshes using scene.onPointerObservable, the meshes can be picked through the minimap camera, but not via the mainCamera. It SEEMS that they can be picked with the mainCamera, but the meshes that get “picked” (verified in console logs) do not align with the meshes my pointer was actually hovering over in the canvas. I think the issue is that the ray casts’ coordinates are based on the minimap viewport’s origin, so the raycasts are still trying to pick meshes in the scene using those coordinates and not the coordinates of the mainCamera viewport.

I noticed some old forum posts about raycasts across viewports before, but none seemed to address the issue, especially in a scenario where I can keep the minimap overlaid on top of the mainCamera viewport in the canvas, and still click on meshes across both viewports. Does anyone have any ideas on how to solve this issue?

I was going to try to convert my own project to a playground, but I borrowed and augmented this playground instead since it is set up nice and will illustrate the point more efficiently, I think.

My initial thought is to somehow detect a pointerDown, get its coordinates relative to the HTML canvas, and use those to determine which viewport the pointerDown landed in. Once that is determined, somehow cast the ray relative to that viewport, if that is at all possible. But since the viewports sit WITHIN the canvas, as produced by the engine, I am not sure whether that can easily be determined from canvas coordinates. Any and all suggestions are appreciated. Thanks in advance!
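To illustrate the idea, here is a minimal sketch of that hit test in canvas pixels. The viewport values and the placement of the minimap in the top-right quarter are hypothetical, and it assumes Babylon-style viewports (fractions of the canvas, measured from the bottom-left), while pointer events measure from the top-left:

```javascript
// Returns true if a pointer position (in canvas pixels, top-left origin)
// falls inside a viewport given as {x, y, width, height} fractions of the
// canvas with a bottom-left origin, as BABYLON.Viewport uses.
function isPointerInViewport(pointerX, pointerY, canvasWidth, canvasHeight, viewport) {
  // Flip pointer Y to the viewport's bottom-left origin.
  const yFromBottom = canvasHeight - pointerY;
  const left = viewport.x * canvasWidth;
  const bottom = viewport.y * canvasHeight;
  const right = left + viewport.width * canvasWidth;
  const top = bottom + viewport.height * canvasHeight;
  return pointerX >= left && pointerX <= right &&
         yFromBottom >= bottom && yFromBottom <= top;
}

// Example: a hypothetical minimap in the top-right quarter of an 800x600 canvas.
const minimapViewport = { x: 0.75, y: 0.75, width: 0.25, height: 0.25 };
console.log(isPointerInViewport(790, 10, 800, 600, minimapViewport));  // near top-right corner
console.log(isPointerInViewport(400, 300, 800, 600, minimapViewport)); // canvas center
```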

@PolygonalSun might be able to help with this one

Hey @johntdaly7, the way that scene.onPointerObservable is set up, it can only provide picking information from one camera at a time. You can explicitly define this with scene.cameraToUseForPointers. As for picking between both your minimap and your main camera, I think your approach of detecting on a pointerDown could work; it just depends on how you’ve set your viewports up. For each viewport, you’ll need four values: X, Y, width, and height (all values from 0 to 1, representing a percentage of the canvas). With scene.onPointerObservable, we can get the absolute location of the cursor. Using those four values for the viewports, we can determine which camera we’re clicking on. Check out lines 69 to 86 of this PG ( PIP Camera Visual Demo | Babylon.js Playground (babylonjs.com)) for an example of this. Because it’s at the edge of the screen, we don’t need to check for the top and right of the PiP camera, but the idea is the same.
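A sketch of that approach: decide which viewport the cursor is in, then point scene.cameraToUseForPointers at the matching camera. The hit test itself is plain fraction math; the Babylon wiring is shown in comments, with the names mainCamera/minimapCamera taken from the question and the viewport values assumed:

```javascript
// Decide whether a cursor position, given as fractions of the canvas
// (fracX from the left, fracYFromTop from the top), lands inside a
// viewport rectangle {x, y, width, height} with a bottom-left origin,
// as BABYLON.Viewport uses.
function isInMinimap(fracX, fracYFromTop, vp) {
  const fracYFromBottom = 1 - fracYFromTop; // flip to bottom-left origin
  return fracX >= vp.x && fracX <= vp.x + vp.width &&
         fracYFromBottom >= vp.y && fracYFromBottom <= vp.y + vp.height;
}

// Wiring it into the scene (sketch; assumes the minimap occupies the
// top-right quarter via new BABYLON.Viewport(0.75, 0.75, 0.25, 0.25)):
//
// scene.onPointerObservable.add((pointerInfo) => {
//   if (pointerInfo.type !== BABYLON.PointerEventTypes.POINTERDOWN) return;
//   const fx = scene.pointerX / engine.getRenderWidth();
//   const fy = scene.pointerY / engine.getRenderHeight();
//   scene.cameraToUseForPointers =
//     isInMinimap(fx, fy, minimapCamera.viewport) ? minimapCamera : mainCamera;
// });
```

With cameraToUseForPointers set this way on each pointerDown, subsequent picks in that observer use the camera whose viewport was actually clicked, so the ray is cast from the correct origin.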
