I’m trying to figure out how to catch a pinch gesture in React Native. I found this thread (Implementing custom pinch gesture in AR - #8 by sebavan) and tried to implement it in React Native. What’s the correct way of enabling pointer events in TypeScript React Native?
That doesn’t work as-is, since it wants a whole lot more options:
enablePointerSelectionOnAllControllers
disablePointerUpOnTouchOut
forceGazeMode
xrInput
The first three are just booleans, so I could easily figure those out, but the xrInput is a little confusing to me. I’m passing this through scene.createDefaultXRExperienceAsync, and I don’t see how I’d already have an xrInput since XR isn’t ‘created’ yet?
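For what it’s worth, those four fields look like the pointer-selection feature’s options (IWebXRControllerPointerSelectionOptions in Babylon.js), and that feature is normally enabled after createDefaultXRExperienceAsync resolves, at which point the returned experience already carries the xrInput you need. A minimal sketch, using stand-in types instead of the real @babylonjs/core imports so the shape is clear (buildPointerSelectionOptions is my own helper name, not a Babylon API):

```typescript
// Stand-in types — the real ones are WebXRInput and
// IWebXRControllerPointerSelectionOptions from @babylonjs/core.
type WebXRInputStandIn = { readonly kind: "xr-input" };

interface PointerSelectionOptions {
  xrInput: WebXRInputStandIn; // required: comes from the created XR experience
  enablePointerSelectionOnAllControllers?: boolean;
  disablePointerUpOnTouchOut?: boolean;
  forceGazeMode?: boolean;
}

// Hypothetical helper: build the options once the XR experience exists.
function buildPointerSelectionOptions(
  xrInput: WebXRInputStandIn
): PointerSelectionOptions {
  return {
    xrInput,
    enablePointerSelectionOnAllControllers: true,
    disablePointerUpOnTouchOut: false,
    forceGazeMode: false,
  };
}

// In real Babylon.js code this would look roughly like:
//   const xr = await scene.createDefaultXRExperienceAsync({
//     uiOptions: { sessionMode: "immersive-ar" },
//   });
//   xr.baseExperience.featuresManager.enableFeature(
//     WebXRFeatureName.POINTER_SELECTION, "stable",
//     buildPointerSelectionOptions(xr.input));

const opts = buildPointerSelectionOptions({ kind: "xr-input" });
console.log(opts.enablePointerSelectionOnAllControllers); // true
```

So the xrInput isn’t something you have up front; it falls out of the created experience and gets fed back into the feature.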
So I think I’ve found the solution, I keep forgetting that I can just use other react native libraries in combination with Babylon
So just using PinchGestureHandler (React Native Gesture Handler) is working out great.
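For anyone landing here later, a rough sketch of what that looks like. The pinch state itself can be kept framework-free; the PinchGestureHandler wiring (shown in the comment) is the only react-native-gesture-handler-specific part, and names like applyPinch are mine, not from any library:

```typescript
// Framework-free pinch state: accumulate the gesture's scale factor into a
// persistent model scale. react-native-gesture-handler reports
// nativeEvent.scale relative to the start of the current gesture, so we
// commit it when the gesture ends.
interface PinchState {
  committedScale: number; // scale when the last gesture ended
  liveScale: number;      // scale including the in-progress gesture
}

function applyPinch(state: PinchState, eventScale: number): PinchState {
  return { ...state, liveScale: state.committedScale * eventScale };
}

function endPinch(state: PinchState): PinchState {
  return { ...state, committedScale: state.liveScale };
}

// Roughly how it hooks up in the component (JSX, react-native-gesture-handler):
//
//   <PinchGestureHandler
//     onGestureEvent={(e) => {
//       pinch = applyPinch(pinch, e.nativeEvent.scale);
//       model.scaling.setAll(pinch.liveScale); // Babylon mesh scaling
//     }}
//     onHandlerStateChange={(e) => {
//       if (e.nativeEvent.state === State.END) pinch = endPinch(pinch);
//     }}>
//     <EngineView />
//   </PinchGestureHandler>

let pinch: PinchState = { committedScale: 1, liveScale: 1 };
pinch = applyPinch(pinch, 1.5); // fingers spread to 1.5x
pinch = endPinch(pinch);
pinch = applyPinch(pinch, 2);   // a second gesture doubles it again
console.log(pinch.liveScale); // 3
```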
Also, @PolygonalSun is doing more work right now to make more of the Babylon.js input system work with Babylon Native, so when that work is done it might be sufficient to simply use an ArcRotateCamera, which already supports pinch zoom.
Hi! Thanks for elaborating. I wanted to use pinch in AR, though; if I wanted to make it work in the browser as well, is there another camera I could use? Or could ArcRotateCamera still catch the events even when I’m in AR?
Not a camera… ArcRotateCamera handles input and moves the camera relative to some point in space (e.g. the center of a model). If you are in AR, the camera is controlled by the pose of the physical device in the physical space, so I think what you want is some component that handles input and uses it to manipulate an actual model (translate, rotate, or scale the model). As far as I know Babylon.js does not have anything like that built in, but @Deltakosh may know for sure.
There is a very interesting thread where we discuss the exact same thing -
TL;DR: AR allows you to receive pointer events for both fingers, and you can use those two points to do your magic. Since we don’t currently provide the screen coordinates, the best way would be to check the distance (in 3D) between the two XR input points (the two fingers) and use the deltas to scale a model (for example).
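To make that concrete, here is a minimal sketch of the distance-delta idea with a hand-rolled 3D point type (in real code these would be Babylon Vector3s taken from the two XR input sources’ pointer positions; pinchScaleDelta is my own name):

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Euclidean distance between the two finger points in world space.
function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Scale factor for this frame: how much the finger distance changed
// relative to the previous frame. Multiply the model's scaling by this.
function pinchScaleDelta(prevDistance: number, currDistance: number): number {
  return prevDistance > 0 ? currDistance / prevDistance : 1;
}

// Example: the fingers move apart so their distance doubles (1 unit → 2 units),
// so the model should be scaled by 2x this frame.
const prev = distance({ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 });
const curr = distance({ x: 0, y: 0, z: 0 }, { x: 2, y: 0, z: 0 });
console.log(pinchScaleDelta(prev, curr)); // 2
```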
I also mention there that it is possible to get the screen coordinates, it just takes a bit of computation that might slow down the AR render loop. If that’s the missing piece I can show you how to project the 3D point to screen space.
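Since the projection came up: it is just a matrix multiply and a perspective divide. In Babylon you would normally let Vector3.Project do this (feeding it the camera’s transformation matrices and the render size), but the underlying math, sketched here with a plain row-major 4x4 so nothing Babylon-specific is needed, looks like this:

```typescript
type Mat4 = number[]; // 16 entries, row-major

// Transform a 3D point by a view-projection matrix and map the result
// from clip space to pixel coordinates. This is essentially what
// Babylon's Vector3.Project does for you.
function projectToScreen(
  p: { x: number; y: number; z: number },
  viewProjection: Mat4,
  width: number,
  height: number
): { x: number; y: number } | null {
  const m = viewProjection;
  const cx = m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3];
  const cy = m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7];
  // (the z row is skipped — only x, y, and w matter for screen coordinates)
  const cw = m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15];
  if (cw <= 0) return null; // behind the camera
  // Perspective divide → normalized device coordinates in [-1, 1]
  const ndcX = cx / cw, ndcY = cy / cw;
  // NDC → pixels (y flipped: NDC y grows up, screen y grows down)
  return { x: ((ndcX + 1) / 2) * width, y: ((1 - ndcY) / 2) * height };
}

// Sanity check: with an identity matrix, the origin lands at screen center.
const identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
console.log(projectToScreen({ x: 0, y: 0, z: 0 }, identity, 1080, 1920)); // { x: 540, y: 960 }
```

Do this once per finger point per frame and you have your two 2D pinch coordinates.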
Hi again. Actually, I could do with some help getting from 3D space to 2D screen coordinates; I haven’t been able to do a deep dive into the matrices involved.
As for my progress, I had another idea: I suspended a transparent plane in the scene and parented it to the camera so it stayed static relative to the screen, then used texture coordinates taken from rays that hit it in place of the pointers. I had a lot of help from this thread:
HOWEVER, the next issue I faced was that I couldn’t get multiple points at the same time using this technique: it just duplicated the first coordinate. So still no pinch…
So that leaves another possible avenue for getting screen coordinates, if there’s a way to get multiple texture coordinates from different rays at the same time. Or maybe some sort of multi-mesh grid, though that seems a bit wasteful.
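In case it helps anyone else stuck at the same point: the duplicated-coordinate problem can also be sidestepped at the input layer. If the events carry a pointer id (as DOM PointerEvents do), you can keep one coordinate per id in a map, so the second finger can never overwrite the first, and compute the pinch from the two stored points. A framework-free sketch (the event shape here is my own, standing in for whatever your input source provides):

```typescript
interface PointerSample { id: number; x: number; y: number; }

// One entry per active finger, keyed by pointer id so the second
// finger's updates never clobber the first finger's coordinate.
const activePointers = new Map<number, PointerSample>();

function pointerDownOrMove(e: PointerSample): void {
  activePointers.set(e.id, e);
}

function pointerUp(id: number): void {
  activePointers.delete(id);
}

// 2D pinch distance once exactly two fingers are down; null otherwise.
function pinchDistance(): number | null {
  if (activePointers.size !== 2) return null;
  const [a, b] = [...activePointers.values()];
  return Math.hypot(a.x - b.x, a.y - b.y);
}

pointerDownOrMove({ id: 0, x: 100, y: 100 });
pointerDownOrMove({ id: 1, x: 400, y: 500 });
console.log(pinchDistance()); // 500
```

Feed the resulting distance deltas into the same scaling logic as before and the pinch works without any plane or ray tricks.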