I’m moving a project of mine from 4.1.0 to 4.2.0. It has several layers of input going on: a canvas where the scene is rendered, with a UI built from HTML on top of it.
Previously I used the scene InputManager’s attachControl function to hook the mouse-move event to a layer on top of the UI, then called it a second time to hook the mouse up/down events to the main scene:
manager.detachControl();
manager.attachControl(false, false, true, pointerElement); // only attaches the mouseMove event
manager.attachControl(true, true, false); // attaches mouseDown and mouseUp to the main canvas
This meant you could start a drag to move the camera on the 3D parts without it triggering when clicking on the UI, but keep dragging when the mouse moved over the UI.
A change in 4.2.0 made it so that when you call attachControl, it automatically detaches itself first if it was called before, which prevents this kind of setup.
I can understand having a check there to stop a particular event being attached several times so that detach works correctly, but there are three different events here and you can choose which ones get attached. A blanket check like this, instead of one per event, means that even if you’re not duplicating any attachments you can’t mix targets.
How should I deal with this? I’m using a minified distribution so it’s not exactly easy to tweak, and I’d rather not resort to extending things because the InputManager is already created and managed internally by the scene.
Hey @TTFTCUTS, would you be able to provide a Playground (using 4.1.0, selectable in the upper right corner by clicking on Latest or whichever version is visible) illustrating what you’re currently doing?
If I’m understanding correctly, the effect that you’re going for is to have your dragging action keep working even as it goes over the UI element. Is that correct?
I’m not entirely sure I can - at least not effectively - since adding elements to the playground is a bit weird (they persist between runs, etc.).
I’ve been using the scene’s onPointerObservable to do custom camera movement instead of the camera’s default controls, because I want the movement to conform to terrain and other constraints. That movement stops when the pointer hits other page elements sitting on top of the canvas, since the observable appears to listen only to events on the canvas itself - which is why I attached it to another element over the UI. With a little testing, though, it seems the default camera controls already handle this: their own move events work across the whole window.
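For reference, the shape of what I’m doing is roughly this (a stripped-down sketch rather than my actual code - the terrain-constraint maths is left out and moveCameraBy is a placeholder):
let dragging = false, lastX = 0, lastY = 0;
scene.onPointerObservable.add((pointerInfo) => {
    const e = pointerInfo.event;
    if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERDOWN) {
        dragging = true; lastX = e.clientX; lastY = e.clientY;
    } else if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERUP) {
        dragging = false;
    } else if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERMOVE && dragging) {
        moveCameraBy(e.clientX - lastX, e.clientY - lastY); // terrain-aware camera movement goes here
        lastX = e.clientX; lastY = e.clientY;
    }
});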
Is there an equivalent way, via the scene’s onPointerObservable, to bind the mouse move to the whole page? If I understand correctly, the pointer system deals with more than just mouseMove, for the sake of touch input and such? I’m not 100% sure about all the extra things it does.
Edit: I realise I should probably show you the actual project in lieu of a working playground - it can be found here (ok → select testlevel → start). In 4.2.0, dragging the camera around stops when you move over any of the UI elements, such as the tracker at the top.
Edit 2: Here is where the events are being listened for, and here is the corresponding place where they’re acted upon. The simplest solution would be to just listen for mousemove normally on window, as mouseout already is (see the sketch below), but I don’t know what would be affected by that change, given the input manager seems to do an awful lot when you attach it.
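i.e. something along these lines (just a sketch of what I mean, not something I’ve tried in the project yet - onDragMove is a placeholder for my existing move handling):
window.addEventListener("mousemove", (e) => {
    onDragMove(e); // would replace the move handling currently attached via the InputManager
});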
So, after looking at the code and trying to understand what’s going on, here’s what I’m seeing. In the scene’s InputManager, a detachControl call was added to attachControl as a defensive measure, to prevent incomplete logic from being used when handling input (e.g. if pointermove events were active for the first call but disabled in a subsequent call, the move listeners are removed to match what the user requests).
Just out of curiosity, is there any reason why you’re separating the event handling at the InputManager level? I want to make sure I understand what’s going on, because there should be setPointerCapture calls in the InputManager that will allow dragging to continue even over external divs. It might be best to just leave the manager inputs attached as default and detach the controls from the camera instead, so that you still have access to all the options of onPointerObservable.
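In other words, something roughly like this (just a sketch, assuming canvas is your render canvas and camera is the one you’re moving manually):
scene.attachControl();        // leave the scene's InputManager attached with its defaults
camera.detachControl(canvas); // stop the camera's built-in controls reacting, so only your onPointerObservable logic moves it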
Mostly because I haven’t worked out any other way. Since I’m not using the default camera controls I went to the scene’s onPointerObservable as a source of input, but that doesn’t give any mouse-move events when moving over HTML elements on top of the canvas - which is what I’d expect from events tied to the canvas underneath those elements.
If it was pointed at an element over the UI, clicks on the UI would pass through to the scene when I don’t want them to. If there’s another way to separate these things then I am very much open to alternative solutions.
I also tried listening for the window’s mouseMove events, but for whatever reason they don’t fire when a mouse button is held, so I can’t detect dragging at all.
Edit: After looking up setPointerCapture, I’ve added it and releasePointerCapture to my mouse down/up handlers where dragging starts and ends, and it seems to be doing what’s needed - though now my tooltips are acting strangely. Will investigate more.
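For anyone following along, what I added amounts to roughly this (simplified - startDrag/endDrag stand in for my actual handlers, and canvas is the render canvas):
canvas.addEventListener("pointerdown", (e) => {
    canvas.setPointerCapture(e.pointerId); // keep receiving move events even while over the UI
    startDrag(e);
});
canvas.addEventListener("pointerup", (e) => {
    canvas.releasePointerCapture(e.pointerId);
    endDrag(e);
});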
Edit 2: It seems that when the mouse-move events are tied to the canvas and you move over an element above it, the coordinates in the event come out as -1,-1.
Edit 3: After changing how the tooltip determines the mouse position (the UI was already using its own mousemove listener, so I just took the coordinates from there), things appear to be working how they should. Thanks for the info on setPointerCapture - it was something I didn’t know about, and it has fixed the problem.
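The tooltip change boils down to something like this (simplified - uiLayer and the tooltip update are placeholders for my own UI code):
let mouseX = 0, mouseY = 0;
uiLayer.addEventListener("mousemove", (e) => {
    mouseX = e.clientX; // track the pointer from the UI layer's own listener
    mouseY = e.clientY; // rather than trusting the -1,-1 coordinates from the canvas events
});
// the tooltip then positions itself from mouseX/mouseY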