Second Camera does not call onLeftPickTrigger Action

I can repro the issue: picking the gizmo needs a mesh behind it.
Once you’ve clicked on the gizmo, you can rotate the camera, and clicking the gizmo keeps working even without the mesh behind it.
But if you pan the camera so the mesh is no longer behind, clicking doesn’t work anymore.

Any idea @PolygonalSun ?

This is why you need a real golden-path example app built from start to finish; a first-person shooter would be a good start.

While the docs look awesome with many cool-sounding features, like multi camera, multi canvas same engine, etc., in my humble experience, every time I try something with multiples, I end up wasting several days and eventually go back to multiple engines (e.g. one engine, one canvas, one camera, simply overlaying canvases to achieve multiple views).

There is always some annoying corner-case bug, mostly to do with ArcRotateCamera controls.

I can take a quick look and see what’s happening but I’m wondering if actual picking isn’t happening against the HUD camera, at least initially.

If you set cameraToUseForPointers = cameraHUD (line 19) then cameraHUD’s meshes are pickable from the beginning… Or if you set isPickable to false for the box instead, that fixes it too…

EDIT: also, I suppose you could call scene.pick manually for the HUD camera and set cameraToUseForPointers to the main camera (or vice versa)… as a workaround? :thinking:
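For what it’s worth, that manual-pick routing could be sketched roughly like this. The function and its parameters are hypothetical names, mocked with plain objects here; in Babylon, `pickHud` would be something like `() => scene.pick(scene.pointerX, scene.pointerY, undefined, false, cameraHUD)`, since scene.pick accepts a camera argument:

```javascript
// Sketch of the workaround: leave cameraToUseForPointers on the main
// camera, and on pointer-down also pick manually against the HUD camera,
// giving the HUD priority since it is drawn on top.
// pickHud/pickMain stand in for scene.pick calls with each camera.
function routePointerDown(pickHud, pickMain, onHudHit, onMainHit) {
    const hud = pickHud();
    if (hud && hud.hit) { onHudHit(hud); return "hud"; }
    const main = pickMain();
    if (main && main.hit) { onMainHit(main); return "main"; }
    return "none"; // nothing under the cursor in either view
}
```

The caveat from the thread still applies: the input manager won’t fire the HUD meshes’ action triggers for you, so `onHudHit` would have to dispatch them manually.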

Already tried all of those tricks. The purpose of HUD camera is to help with editing and picking in the main scene. Proposed solution would defeat that purpose by disabling picking in the main scene.

My experience so far with Babylon is that multiviews (be it multi canvas, multi viewport, multi scene, etc.; I have tried and tested them all) have never worked for a real-world app, because a real-world use case requires composable interactions, not just read-only views with a fixed canvas size and a single input control.

F.Y.I., the use case is a 3D configurator with HUD cameras, similar to Blender’s axes widget: drag anything into the scene (from the catalog on the right), then edit it.

For example, in multicanvas mode, all canvases must be the same size, or it will not work. This is because Babylon Camera controls were written in a procedural, object-oriented way, not as modular components that can be mixed and matched with other elements.

For this bug, I already circumvented it by not using multiple cameras at all, just an old-school separate-engine canvas overlay, then had to manually sync that HUD engine’s camera with the main scene’s camera. Quick and dirty, but less headache, and it gets the job done.
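That quick-and-dirty sync amounts to copying the main ArcRotateCamera’s angles onto the HUD camera every frame. A minimal sketch, with plain objects standing in for the two cameras (the function name and fixed HUD radius are assumptions, not from the original post):

```javascript
// Sketch: mirror the main camera's orientation onto the HUD camera each
// frame, keeping the HUD camera at its own fixed radius so the axes
// widget stays framed regardless of how far the main camera zooms.
function syncHudCamera(mainCam, hudCam, hudRadius = 5) {
    hudCam.alpha = mainCam.alpha;   // ArcRotateCamera horizontal angle
    hudCam.beta = mainCam.beta;     // ArcRotateCamera vertical angle
    hudCam.radius = hudRadius;      // HUD keeps its own distance
}

// In the real app this would run inside the HUD engine's render loop, e.g.
// hudEngine.runRenderLoop(() => { syncHudCamera(camera, cameraHUD); hudScene.render(); });
```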

I could see making use of cameraToUseForPointers and the position of the cursor with respect to the main canvas. For example, using the cameraHUD viewport’s dimensions, you could swap cameraToUseForPointers to cameraHUD when the cursor is over its region and swap back to camera when it leaves:

scene.onPointerObservable.add((ev) => {
        let rect = scene.getEngine().getRenderingCanvasClientRect();
        let x = ev.event.clientX - rect.left;
        let y = ev.event.clientY - rect.top;

        // The HUD viewport occupies the top-right corner of the canvas here
        if (x >= (0.85 * rect.width) && y <= (0.15 * rect.height)) {
            scene.cameraToUseForPointers = cameraHUD;
        } else {
            scene.cameraToUseForPointers = camera;
        }
    }, BABYLON.PointerEventTypes.POINTERMOVE);

Here’s a slightly modified form of the PG to demonstrate this: Playground Default Camera Axes | Babylon.js Playground

Unfortunately, this would also mean having to manually maintain that value so that the proper camera context is used for picking.

I’ve also gone through this scenario; it’s not suitable because I do not know the exact position of the HUD Camera.

The layout uses flex, which means that when the screen width is too narrow, the right HUD Camera wraps to the next line, below the left HUD Preview camera.

Plus, I want to make the HUD width/height dynamic too for different devices. Having ad hoc position logic means inflexible code that’s hard to change.

So far, the system is designed to be as performant as it can be, and we do not iterate over each camera for this reason. It has nothing to do with the style and way of coding. Basically, only one camera supports picking and actions by default. If you need more, you could use pick and so on manually, but I agree it loses some interesting default functionality.

I would totally be open to a PR addressing this, which could be as simple as adding a flag to support multicam and, in this mode, ensuring that scene.pick iterates through the list of cameras, does the normal picking for each mesh matching their layerMask, and only keeps the closest mesh.
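That flag-plus-iteration idea could look roughly like this. A minimal sketch with plain objects, not the actual Babylon implementation; `multiCameraPick` and `pickWithCamera` are hypothetical names, where in a real PR `pickWithCamera` would be a scene.pick call restricted to meshes matching that camera’s layerMask:

```javascript
// Sketch: iterate the cameras, pick once per camera, and keep the
// closest hit overall (pickingInfo.distance decides the winner).
function multiCameraPick(cameras, pickWithCamera) {
    let best = null;
    for (const cam of cameras) {
        const info = pickWithCamera(cam); // { hit, distance, pickedMesh } or null
        if (info && info.hit && (best === null || info.distance < best.distance)) {
            best = info;
        }
    }
    return best;
}

// Toy run with mocked picks: the HUD hit is closer, so it wins.
const result = multiCameraPick(
    [{ name: "main" }, { name: "hud" }],
    (cam) => cam.name === "hud"
        ? { hit: true, distance: 1.5, pickedMesh: "axisSphere" }
        : { hit: true, distance: 10, pickedMesh: "box" }
);
```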

This still sounds like a niche feature, but I’d be glad to help and provide guidance if somebody from the community wants to tackle it.

Also, may I ask why the multiscene approach does not work?

Do you mean multiple scenes on the same engine? It does not work because each Engine can only render at one canvas size at any time (i.e. you can change the canvas size, but you cannot have two canvases of different sizes simultaneously sharing an engine).

I will look into refactoring Camera controls / scene picking to be more modular and submit a PR, but that’s further in the future, because currently my todo list is full of critical features to deliver first.

This is exactly what viewports would help to achieve? I could add this to my previous playground if you want?

Yes, I chose multiple viewports because it circumvented the canvas size issue, but then ran into Camera controls issue - the topic of this bug report.

Yes, I would be interested to see how you solve this Blender-style axis control use case:

  1. Axis matches main camera view angles
  2. Dragging Axis changes main camera angles
  3. Clicking Axis sphere changes main camera angle to that axis
  4. Main Camera works with all standard behaviors:
  • Camera zoom, pan, keyboard events, etc.
  • Picking
  • Gizmo
  • Action managers

Something like this might work, though I agree it’s a more manual setup.


Wow, another ingenious hack, Seb! Thank you.

I was not aware that you could use utilityLayerRenderer.setRenderCamera(camera) to solve the issue for multiple cameras.

It’s not clean, and it’s not multi-input control, but it will definitely work for my use case (hopefully there won’t be problems with cursor reconciliation, because in the app I have more custom cursor states than just pointer).

This should be much more performant than spinning up a new engine for each HUD tool, because I want to add several editing tools this way, similar to Adobe Illustrator, but in 3D.

Yup, it won’t be bulletproof, but you get the idea, if you do not run into other limitations…

I tried this multi-Scene, custom-Viewport approach. Unfortunately, it still doesn’t work, because every time you click on an Axis, the main Scene registers that as a picking event and deselects the Gizmo or selects another Mesh (I need all states in the main Scene to remain unaffected by HUD tools, unless explicitly changed).

I tried to stop event propagation, but it seems like by the time you get the onLeftPickTrigger observable, it’s already too late.

This bug requires a major refactor to Camera controls, there is no hack that will work without creating more bugs.

The correct behavior would be to have multiple Camera input controls, each behaving like a standard HTML DOM element: swallowing the PointerEvent or propagating it up (i.e. to the Cameras behind), if not stopped.

What about simply adding an array camerasToUseForPointers to the Scene class which, if it exists, would be iterated over until the first match is found? It seems that would solve the issue at hand, and could maybe be a fairly straightforward PR to implement…

OTOH maybe using whichever mesh is closest would be better, but I’m not sure how to calculate that closeness value? :thinking:

It’s better to check the order of scene.cameras; this way, people can control the priority of picking.


scene.activeCameras = [mainCamera, axisCamera, customToolCamera]

// Picking order:

  1. customToolCamera → if nothing picked, pass the event to axisCamera (can be prevented with event.stopPropagation())
  2. axisCamera → if nothing picked, pass the event to mainCamera (can be prevented with event.stopPropagation())
  3. mainCamera → receives the picking event if 1 and 2 did not pick anything and did not call event.stopPropagation()
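That priority chain could be sketched as a small loop. Plain objects stand in for Babylon’s cameras and pickingInfo; `pickByPriority`, `pickWithCamera`, and `handle` are hypothetical names for a per-camera scene.pick and pointer handler, not existing API:

```javascript
// Sketch: walk the camera list in reverse (last-listed renders on top,
// so it gets first dibs). A handler can swallow the event via
// stopPropagation() even on a miss; otherwise the first actual hit wins.
function pickByPriority(activeCameras, pickWithCamera, handle) {
    for (let i = activeCameras.length - 1; i >= 0; i--) {
        let stopped = false;
        const ev = { stopPropagation: () => { stopped = true; } };
        const info = pickWithCamera(activeCameras[i]); // may be a miss
        handle(activeCameras[i], info, ev);
        if ((info && info.hit) || stopped) return info; // event consumed
    }
    return null; // fell through every camera untouched
}

// Toy run: the tool camera misses, the axis camera hits, so the main
// camera never sees the event.
const visited = [];
const cameras = [{ name: "main" }, { name: "axis" }, { name: "tool" }];
const hit = pickByPriority(
    cameras,
    (cam) => (cam.name === "axis" ? { hit: true, pickedMesh: "axisSphere" } : { hit: false }),
    (cam) => visited.push(cam.name)
);
```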

Then you also need to refactor Gizmo and all other tools relying on scene.activeCamera, because currently Babylon sets this to the last rendered camera after each render loop; in this case it incorrectly sets it to customToolCamera, which will break GizmoManager, unless explicitly declared.

That’s why I said it’s a major refactor.

Iterating over the activeCameras in reverse order sounds good to me, to give priority to what’s drawn in front… Let’s see what @sebavan thinks about that… And I’d volunteer to help with the first piece of the puzzle for starters: processing only the first hit (the one that’s most in front) when the feature is enabled.

Adding the ability to have multiple hits, which can be cancelled by calling a stopPropagation function, sounds like a great second PR once the first part is in place. And that last part about GizmoManager, etc. currently relying on activeCamera incorrectly would be left for separate PR(s) as well, if needed. Otherwise, if it needs to be one big PR that handles all of that, it will be too much for me to take on all at once RN. :slight_smile:

I am totally OK with introducing something for this, but could we wait until 5.0 is released? This might be quite a large change.

From what I have seen, scene.pick would be the most central place (or onMove, onDown, etc. in inputManager). I guess we could have a mode, like firstCameraWin or distance-based (knowing the hit distance is part of the pickingInfo), and by default leave it as it is.

You could create a PR that might, depending on its overall complexity, stay in wait mode for a while, if that’s OK with you?

Waiting until after 5.0 is fine with me; it will be much better to do it with your feedback and ideas in real time, IMO. :slight_smile:
