Second Camera does not call onLeftPickTrigger Action

In this PG with a HUD camera, the axis's actionManager does not fire its OnLeftPickTrigger action when there is no mesh behind the HUD camera.


cc @Cedric

Not sure how to achieve these Blender-style axis handles, because currently:

  1. To show the axes, a second HUD camera has to be in the scene.activeCameras list, as the last camera, so that it renders over the main camera (see the sketch after this list)
  2. This results in scene.activeCamera always being set to that second camera, which breaks all picking events and the Gizmo
  3. But without adding the second camera to scene.activeCameras, it's not possible to interact with the axis handles (which is the whole purpose of this HUD camera)
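
For context, here is roughly how the setup in points 1-3 looks (a simplified sketch; camera and cameraHUD match the PG names, axisMeshes is a placeholder for the axis handle meshes):

// Main camera renders the scene on layer 0x1.
const camera = new BABYLON.ArcRotateCamera("camera", Math.PI / 2, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, true);
camera.layerMask = 0x1;

// HUD camera renders only the axis meshes (layer 0x2) into a corner viewport.
const cameraHUD = new BABYLON.ArcRotateCamera("cameraHUD", camera.alpha, camera.beta, 5, BABYLON.Vector3.Zero(), scene);
cameraHUD.viewport = new BABYLON.Viewport(0.85, 0.85, 0.15, 0.15);
cameraHUD.layerMask = 0x2;
axisMeshes.forEach((m) => (m.layerMask = 0x2));

// The HUD camera must come last so it renders over the main camera,
// but this is also what makes scene.activeCamera end up pointing at it.
scene.activeCameras = [camera, cameraHUD];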

How can I solve this chicken-and-egg problem? (I want to avoid using the 2D GUI because of blurriness on retina displays.)

I can repro the issue: picking needs a mesh behind.
Once you've clicked on the gizmo, you can rotate the camera and clicking the gizmo keeps working even without a mesh behind.
But if you pan the camera so the mesh is no longer behind, clicking stops working.

Any idea, @PolygonalSun?

This is why you need a real golden-path example app built from start to finish; a first-person shooter would be a good start.

While the docs look awesome, with many cool-sounding features like multi camera, multi canvas on the same engine, etc., in my humble experience, every time I try something with multiples I end up wasting several days and eventually go back to multiple engines (e.g. one engine, one canvas, one camera, and simply overlaying canvases to achieve multiple views).

There is always some annoying corner-case bug, mostly to do with ArcRotateCamera controls.

I can take a quick look and see what’s happening but I’m wondering if actual picking isn’t happening against the HUD camera, at least initially.

If you set cameraToUseForPointers = cameraHUD (line 19), then cameraHUD's meshes are pickable from the beginning… or if you set isPickable to false on the box instead, that fixes it too…

EDIT: also, I suppose you could call scene.pick manually for the HUD camera and set cameraToUseForPointers to the main camera (or vice versa)… as a workaround? :thinking:
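
Something like this is what I'm picturing for the manual pick (rough sketch only, assuming the HUD meshes live on their own layer mask, e.g. 0x2):

// Keep default pointer handling on the main camera...
scene.cameraToUseForPointers = camera;

// ...and pick against the HUD camera by hand on pointer down.
scene.onPointerObservable.add((pointerInfo) => {
    const hit = scene.pick(
        scene.pointerX,
        scene.pointerY,
        (mesh) => (mesh.layerMask & 0x2) !== 0, // only consider HUD meshes (assumed layer)
        false,
        cameraHUD
    );
    if (hit && hit.hit) {
        // React to the axis handle here instead of relying on its ActionManager.
        console.log("HUD pick:", hit.pickedMesh && hit.pickedMesh.name);
    }
}, BABYLON.PointerEventTypes.POINTERDOWN);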

Already tried all of those tricks. The purpose of the HUD camera is to help with editing and picking in the main scene; the proposed solutions would defeat that purpose by disabling picking in the main scene.

My experience so far with Babylon is that multiview (be it multi canvas, multi viewport, multi scene, etc. - and I have tried and tested them all) has never worked for a real-world app, because a real-world use case requires composable interactions, not just read-only views with a fixed canvas size and a single input control.

F.Y.I. the use case: a 3D configurator with HUD cameras, similar to Blender's axes - drag anything into the scene (from the catalog on the right), then edit it.

For example, in multi-canvas mode, all canvases must be the same size, or it won't work. This is because Babylon's camera controls were written in a procedural, object-oriented way, not as modular components that can be mixed and matched with other elements.

For this bug, I already circumvented it by not using multiple cameras at all - just an old-school separate engine with an overlaid canvas - and then manually synced that HUD engine's camera with the main scene's camera. Quick and dirty, but less headache, and it gets the job done.
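
For reference, the manual sync is basically just copying the main ArcRotateCamera's angles into the HUD engine's camera before each HUD render (simplified sketch; mainScene/hudScene and mainCamera/hudCamera are my own names):

// Before the HUD scene renders, mirror the main camera's angles so the
// axis widget always shows the main view's orientation.
hudScene.onBeforeRenderObservable.add(() => {
    hudCamera.alpha = mainCamera.alpha;
    hudCamera.beta = mainCamera.beta;
    // Radius stays fixed so the widget keeps a constant size.
});

// Each engine drives its own canvas.
mainEngine.runRenderLoop(() => mainScene.render());
hudEngine.runRenderLoop(() => hudScene.render());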

I could see making use of cameraToUseForPointers and the position of the cursor with respect to the main canvas. For example, using cameraHUD's dimensions, you could swap cameraToUseForPointers to cameraHUD when the cursor is over its region and swap back to camera when it leaves:

scene.onPointerObservable.add((ev) => {
    // Cursor position relative to the canvas.
    let rect = scene.getEngine().getRenderingCanvasClientRect();
    let x = ev.event.clientX - rect.left;
    let y = ev.event.clientY - rect.top;

    // Top-right corner of the canvas is where cameraHUD's viewport lives.
    if (x >= (0.85 * rect.width) && y <= (0.15 * rect.height)) {
        scene.cameraToUseForPointers = cameraHUD;
    } else {
        scene.cameraToUseForPointers = camera;
    }
}, BABYLON.PointerEventTypes.POINTERMOVE);

Here’s a slightly modified form of the PG to demonstrate this: Playground Default Camera Axes | Babylon.js Playground (babylonjs.com)

Unfortunately, this would also mean having to manually maintain that value so that the proper camera context is used for picking.
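
If the viewport is set programmatically, the region check could at least be derived from cameraHUD.viewport rather than hard-coded fractions (a sketch; it still has to be kept in sync if the layout changes, and note that Viewport y is measured from the bottom while pointer y is from the top):

scene.onPointerObservable.add((pointerInfo) => {
    const rect = scene.getEngine().getRenderingCanvasClientRect();
    const x = (pointerInfo.event.clientX - rect.left) / rect.width;
    // Flip y so it matches the viewport's bottom-up convention.
    const y = 1 - (pointerInfo.event.clientY - rect.top) / rect.height;
    const vp = cameraHUD.viewport;

    const overHUD =
        x >= vp.x && x <= vp.x + vp.width &&
        y >= vp.y && y <= vp.y + vp.height;

    scene.cameraToUseForPointers = overHUD ? cameraHUD : camera;
}, BABYLON.PointerEventTypes.POINTERMOVE);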


I've also gone through this scenario; it's not suitable because I do not know the exact position of the HUD camera.

The layout uses flex, which means that when the screen is too narrow, the right HUD camera wraps to the next line, below the left HUD preview camera.

Plus, I want to make the HUD width/height dynamic for different devices. Having ad hoc logic means inflexible code that's hard to change.

So far the system is designed to be as performant as it can be, and we do not iterate over each camera for that reason. It has nothing to do with the style or way of coding. Basically, only one camera supports picking and actions by default. If you need more, you can use scene.pick and so on manually, but I agree it loses some interesting default functionality.

I would totally be open to a PR addressing this. It could be as simple as adding a flag to support multicam, and in this mode ensuring that scene.pick iterates through the list of cameras, does the normal picking for each mesh matching their layerMask, and only keeps the closest mesh.
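
In user land today it would look something like this (untested sketch; it compares pickInfo.distance across cameras, which is a simplification):

// Try picking with each active camera against the meshes on its layer,
// keeping the closest hit overall.
function multiCameraPick(scene, x, y) {
    const cameras = scene.activeCameras && scene.activeCameras.length
        ? scene.activeCameras
        : [scene.activeCamera];
    let best = null;
    for (const cam of cameras) {
        const hit = scene.pick(
            x,
            y,
            (mesh) => (mesh.layerMask & cam.layerMask) !== 0,
            false,
            cam
        );
        if (hit && hit.hit && (!best || hit.distance < best.distance)) {
            best = hit;
        }
    }
    return best;
}

// Usage:
// const pickInfo = multiCameraPick(scene, scene.pointerX, scene.pointerY);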

This still sounds like a niche feature, but I'd be glad to help and provide guidance if somebody from the community wants to tackle it.

Also, may I ask why the multi-scene approach does not work? https://playground.babylonjs.com/#YIU90M#621


Do you mean multiple scenes with the same engine? It does not work because each Engine can only render at one canvas size at any time (i.e. you can change the canvas size, but you cannot have two canvases of different sizes simultaneously sharing one engine).

I will look into refactoring the camera controls / scene picking to be more modular and submit a PR, but that's more of a far-future thing, because currently my todo list is full of critical features to deliver first.

This is exactly what viewports would help to achieve? I could add this to my previous playground if you want?

Yes, I chose multiple viewports because that circumvented the canvas size issue, but then I ran into the camera controls issue - the topic of this bug report.

Yes, I would be interested to see how you would solve this Blender-style axis control use case:

  1. Axis matches main camera view angles
  2. Dragging Axis changes main camera angles
  3. Clicking Axis sphere changes main camera angle to that axis
  4. Main Camera works with all standard behaviors:
  • Camera zoom, pan, keyboard events, etc.
  • Picking
  • Gizmo
  • Action managers

Something like this might work: https://playground.babylonjs.com/#YIU90M#646 - though I agree it's a more manual setup.


Wow, another ingenious hack, Seb! Thank you.

I was not aware that you could use utilityLayerRenderer.setRenderCamera(camera) to solve the issue for multiple cameras.
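
If I read the PG right, the relevant bit boils down to something like this (my paraphrase, not the exact PG code):

// Gizmos render on the default utility layer; point it at the main camera so
// gizmo rendering/picking ignores the HUD camera that renders last.
const utilityLayer = BABYLON.UtilityLayerRenderer.DefaultUtilityLayer;
utilityLayer.setRenderCamera(camera);

const gizmoManager = new BABYLON.GizmoManager(scene, 1, utilityLayer);
gizmoManager.positionGizmoEnabled = true;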

It's not clean, and it's not multi-input control, but it will definitely work for my use case (hopefully there won't be problems with cursor reconciliation, because in the app I have more custom cursor states than just pointer).

This should also be much more performant than spinning up a new engine for each HUD tool, because I want to add several editing tools this way, similar to Adobe Illustrator but in 3D.

Yup, it won't be bulletproof, but you get the idea, as long as you do not run into other limitations…

I tried this multi-scene, custom-viewport approach. Unfortunately, it still doesn't work, because every time you click on an axis, the main scene registers that as a picking event and deselects the Gizmo or selects another mesh (I need all states in the main scene to remain unaffected by HUD tools unless explicitly changed).

I tried to stop event propagation, but it seems that by the time the OnLeftPickTrigger observable fires, it's already too late.

This bug requires a major refactor of the camera controls; there is no hack that will work without creating more bugs.

The correct behavior would be to have multiple camera input controls, each behaving like a standard HTML DOM element: swallowing the PointerEvent and then propagating it up (i.e. to the cameras behind), unless stopped.
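
Today's closest approximation I can think of is onPrePointerObservable with skipOnPointerObservable, but that is still per-scene plumbing rather than real per-camera input controls (rough, untested sketch):

// Intercept the event before default picking, handle it against the HUD
// camera, and only skip the main scene's handling when the HUD consumed it.
scene.onPrePointerObservable.add((pointerInfo) => {
    const hudHit = scene.pick(
        scene.pointerX,
        scene.pointerY,
        (mesh) => (mesh.layerMask & cameraHUD.layerMask) !== 0,
        false,
        cameraHUD
    );
    if (hudHit && hudHit.hit) {
        handleAxisClick(hudHit.pickedMesh);         // hypothetical handler
        pointerInfo.skipOnPointerObservable = true; // acts like stopPropagation
    }
}, BABYLON.PointerEventTypes.POINTERDOWN);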

What about simply adding an array camerasToUseForPointers to the Scene class, which, if it exists, would be iterated over until the first match is found? It seems that would solve the issue at hand and could be a fairly straightforward PR to implement…

OTOH maybe using whichever mesh is closest would be better, but I’m not sure how to calculate that closeness value? :thinking:

It's better to check the order of scene.activeCameras; this way people can control the priority of picking.

Example:

scene.activeCameras = [mainCamera, axisCamera, customToolCamera]

// Picking order:

  1. customToolCamera → if nothing is picked, pass the event to axisCamera (can be prevented with event.stopPropagation())
  2. axisCamera → if nothing is picked, pass the event to mainCamera (can be prevented with event.stopPropagation())
  3. mainCamera → receives the picking event if 1 and 2 did not pick anything and did not call event.stopPropagation()

Then you also need to refactor the Gizmo and all other tools relying on scene.activeCamera, because Babylon currently sets it to the last rendered camera after each render loop; in this case it would incorrectly be set to customToolCamera, which will break GizmoManager unless explicitly declared.

That's why I said it's a major refactor.