In my project I have two UniversalCameras. The main camera is a UniversalCamera used for navigating inside the model; it gets swapped out for an ArcRotateCamera when inspecting the model. When testing, though, I just move the UniversalCamera to where the ArcRotateCamera would be and keep the UniversalCamera active, so for all intents and purposes, treat the main camera as a UniversalCamera. The second UniversalCamera is a stationary minimap camera; I keep its orientation fixed by never attaching its controls to the canvas. To render both cameras at once, with the minimap overlaid on top of the main view, I use the scene.activeCameras array and append the minimap camera to the end of it.
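For reference, here is a minimal sketch of that setup. All the names and the minimap viewport rectangle are my own placeholders, not my actual values; the real Babylon calls are shown in comments because rendering needs a canvas, so the runnable part below only models the activeCameras ordering with plain objects.

```javascript
// Real Babylon.js setup would be roughly (names/values are assumptions):
//   const mainCamera = new BABYLON.UniversalCamera("main", new BABYLON.Vector3(0, 1, -5), scene);
//   mainCamera.attachControl(canvas, true);            // controls for navigating the model
//   const minimapCamera = new BABYLON.UniversalCamera("minimap", new BABYLON.Vector3(0, 20, 0), scene);
//   // no attachControl() on the minimap, so its orientation stays fixed
//   minimapCamera.viewport = new BABYLON.Viewport(0.75, 0.75, 0.25, 0.25); // top-right overlay
//   scene.activeCameras.push(mainCamera);
//   scene.activeCameras.push(minimapCamera);           // appended last, so it renders on top

// Stubbed model of the same arrangement:
const mainCamera = { name: "main", viewport: { x: 0, y: 0, width: 1, height: 1 } };
const minimapCamera = { name: "minimap", viewport: { x: 0.75, y: 0.75, width: 0.25, height: 0.25 } };
const activeCameras = [mainCamera, minimapCamera]; // minimap appended to the end of the array
console.log(activeCameras[activeCameras.length - 1].name); // last camera draws over the main view
```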
The issue is that when I pick meshes via scene.onPointerObservable, meshes can be picked in the minimap viewport but not through the main camera. It SEEMS as though they can be picked with the main camera, but the meshes reported as picked (verified in console logs) do not match the meshes my pointer was actually hovering over in the canvas. I suspect the picking rays' coordinates are based on the minimap viewport's origin, so the raycasts try to hit meshes in the scene relative to those coordinates rather than the coordinates of the main camera's viewport.
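To make the coordinate mismatch concrete, here is a small sketch of converting a pointer position (canvas pixels, top-left origin) into coordinates local to one camera's viewport. The function name is mine; the only assumption about Babylon is that viewports are expressed as fractions of the render size with a bottom-left origin, which is why the Y axis gets flipped.

```javascript
// Convert a canvas-pixel pointer position into pixels local to one viewport.
// vp is {x, y, width, height} in [0..1] fractions with a bottom-left origin
// (Babylon's Viewport convention); pointer coords use a top-left origin.
function toViewportLocal(pointerX, pointerY, canvasWidth, canvasHeight, vp) {
  const vpLeftPx = vp.x * canvasWidth;
  const vpTopPx = (1 - vp.y - vp.height) * canvasHeight; // viewport's top edge in pointer coords
  return {
    x: pointerX - vpLeftPx,
    y: pointerY - vpTopPx,
  };
}

// Example: an 800x600 canvas with a minimap in the top-right quarter-corner.
const minimapVp = { x: 0.75, y: 0.75, width: 0.25, height: 0.25 };
console.log(toViewportLocal(700, 50, 800, 600, minimapVp)); // { x: 100, y: 50 }
```

If a pick ray is built from coordinates local to the wrong viewport, it will hit whatever happens to sit at those offsets in the other view, which would explain the misaligned picks I am seeing.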
I noticed some old forum posts about raycasts across viewports, but none seemed to address this issue, particularly in a scenario where the minimap stays overlaid on top of the main camera's viewport in the canvas and meshes remain clickable in both viewports. Does anyone have any ideas on how to solve this?
I was going to convert my own project into a playground, but I borrowed and augmented this playground instead, since it is already set up nicely and should illustrate the point more efficiently.
My initial thought is to detect a pointerDown, get its coordinates relative to the HTML canvas, and use those to determine which viewport the pointerDown landed in. Once that is determined, somehow cast the ray relative to that particular viewport, if that is at all possible. But since the viewports sit WITHIN the canvas produced by the engine, I am not sure whether this can easily be determined from canvas coordinates alone. Any and all suggestions are appreciated. Thanks in advance!
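A sketch of that idea, in case it helps discussion: hit-test the pointer against each active camera's viewport (last in scene.activeCameras wins, since it is drawn on top) and then pick with that camera. All function names here are mine; the Babylon APIs assumed are scene.activeCameras, camera.viewport, and the optional camera argument of scene.pick (shown in the usage comment). If I recall correctly, Babylon also exposes scene.cameraToUseForPointers, which forces default picking through one camera, but that alone would not make both viewports clickable, hence the per-event choice below.

```javascript
// Return the topmost active camera whose viewport contains the pointer,
// or null if none does. Viewports are {x, y, width, height} fractions with
// a bottom-left origin, so the pointer's Y is flipped before testing.
function chooseCameraForPointer(pointerX, pointerY, canvasWidth, canvasHeight, cameras) {
  const xFrac = pointerX / canvasWidth;
  const yFrac = 1 - pointerY / canvasHeight; // flip to the viewport's bottom-left origin
  // Walk back-to-front so an overlaid viewport (appended last) takes priority.
  for (let i = cameras.length - 1; i >= 0; i--) {
    const vp = cameras[i].viewport;
    if (xFrac >= vp.x && xFrac <= vp.x + vp.width &&
        yFrac >= vp.y && yFrac <= vp.y + vp.height) {
      return cameras[i];
    }
  }
  return null;
}

// Stubbed cameras standing in for scene.activeCameras:
const cameras = [
  { name: "main", viewport: { x: 0, y: 0, width: 1, height: 1 } },
  { name: "minimap", viewport: { x: 0.75, y: 0.75, width: 0.25, height: 0.25 } },
];
console.log(chooseCameraForPointer(780, 20, 800, 600, cameras).name);  // top-right corner
console.log(chooseCameraForPointer(100, 500, 800, 600, cameras).name); // lower-left area

// Usage inside the scene (not run here, and to be verified against the docs):
//   const cam = chooseCameraForPointer(scene.pointerX, scene.pointerY,
//                                      engine.getRenderWidth(), engine.getRenderHeight(),
//                                      scene.activeCameras);
//   const hit = scene.pick(scene.pointerX, scene.pointerY, undefined, false, cam);
```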