How to enable touch input for ArcRotateCamera on Babylon React Native

I’m currently using BabylonReactNative (GitHub - BabylonJS/BabylonReactNative) to build a POC of a customiser we previously built with babylon.js on the web.
Kudos for the insane work on babylon.js and babylon-react-native :100::100::100:.

Hi lowkey_guy,

Sounds like a cool use case! Paging @ryantrem, our #1 expert on BabylonReactNative.


Thanks @lowkey_guy for the kind words!

Short answer: Work is still under way to enable ArcRotateCamera (among other things) to correctly receive and process touch input in both the browser and the native runtime context. @PolygonalSun, can you share the latest on this, or any expected timelines?

Long answer: Babylon.js was originally written to get its input directly from the DOM, which doesn’t scale well to Babylon Native. To solve this, DeviceSourceManager was introduced to the Babylon.js API. When running in the browser, DeviceSourceManager gets input through the DOM, and when running in a native app, it gets input through the native view. Today, app code can use DeviceSourceManager without needing to know what context it is running in. The work that is still under way is reworking various Babylon.js code (such as ArcRotateCamera) to get input from DeviceSourceManager rather than directly from the DOM.


Amazing :slight_smile:!
Just curious, is there a place where we can track which features are implemented on the native side and which are not?
We at Scapic would definitely like to contribute to the Babylon Native project!

Also unrelated, but are all the features of PBRMaterial implemented on React Native?


Just curious, is there a place where we can track which features are implemented on the native side and which are not

Most things are tracked in the Babylon Native GitHub issues: Issues · BabylonJS/BabylonNative · GitHub
Some things that support Babylon Native but are done in the Babylon.js codebase are tracked in the Babylon.js GitHub issues: Issues · BabylonJS/Babylon.js · GitHub
Things specific to the React Native integration are tracked in the Babylon React Native GitHub issues: Issues · BabylonJS/BabylonReactNative · GitHub

We probably should get better at tagging GitHub issues as features… If you think something is missing, feel free to post here or log a new issue.

We at Scapic would definitely like to contribute to the Babylon Native project

We’d love any contributions you are willing to make! If there are specific areas you’d like to contribute to, let us know and we can help get you started. We could probably also do a pass over our current issues and do a better job of marking help wanted issues or issues that are good to start with. Would one of those options work?

Also, if in the meantime you want to experiment with processing lower-level input directly, your code could look something like this:

import { DeviceSourceManager, DeviceType, PointerInput } from "@babylonjs/core";

const deviceSourceManager = new DeviceSourceManager(engine);
deviceSourceManager.onAfterDeviceConnectedObservable.add(deviceEventData => {
  if (deviceEventData.deviceType === DeviceType.Touch) {
    const deviceSource = deviceSourceManager.getDeviceSource(deviceEventData.deviceType, deviceEventData.deviceSlot)!;
    deviceSource.onInputChangedObservable.add(inputEventData => {
      if (inputEventData.inputIndex === PointerInput.LeftClick && inputEventData.currentState === 1) {
        // process touch down event
      } else if (inputEventData.inputIndex === PointerInput.LeftClick && inputEventData.currentState === 0) {
        // process touch up event
      } else if (inputEventData.inputIndex === PointerInput.Horizontal || inputEventData.inputIndex === PointerInput.Vertical) {
        // process touch move event
      }
    });
  }
});
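To make the branch logic above concrete, here is a minimal, dependency-free sketch that maps an (inputIndex, currentState) pair to an event name. Note the numeric values in `Input` are local stand-ins defined purely for illustration, not Babylon’s actual `PointerInput` enum:

```javascript
// Local stand-in constants (illustrative only, not Babylon's PointerInput enum).
const Input = { Horizontal: 0, Vertical: 1, LeftClick: 2 };

// Classify a raw input change into a touch event name, mirroring the branches above.
function classifyTouchEvent(inputIndex, currentState) {
  if (inputIndex === Input.LeftClick) {
    // LeftClick state 1 means the touch went down; 0 means it was released.
    return currentState === 1 ? "touch-down" : "touch-up";
  }
  if (inputIndex === Input.Horizontal || inputIndex === Input.Vertical) {
    // Position changes on either axis are movement.
    return "touch-move";
  }
  return "unknown";
}
```

A dispatcher like this keeps the observable callback small: you can switch on the returned name instead of nesting the index/state comparisons inline.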

Yes, this would definitely help.

Will definitely have a look.

Will definitely dabble with this.

So the plan for this is to get all of the Camera and InputManager work done for the 4.2 release. So far, the code for InputManager is pretty much done but the camera work is taking a bit longer because of issues with losing backwards compatibility.


@lowkey_guy circling back here, we did a pass over the issues and marked some as good first issue. Of course if there are other issues that are more pertinent to you that are not represented in the issues list, please feel free to log issues and/or make contributions towards fixing those issues! :grin:

Hello @ryantrem, sorry about the thread bump, but how would I go about capturing pinch gestures?

Hi @langel - can you clarify, are you looking to:

  1. Have touch input managed automatically for ArcRotateCamera in the context of Babylon React Native (the original topic).
  2. Receive raw touch input from DeviceSourceManager (as in the example I provided earlier in this thread) but convert that into touch gestures like pinch (or rotate, or pan, etc.)?

Sorry if I didn’t make it very clear. What I was looking for was a way to capture/process a pinch or pan touch input with DeviceSourceManager, like the example you posted above, so I can do something similar to ArcRotateCamera’s zoom functionality, or scale a model up and down, without having to use React Native Gesture Handler. I figured it out already and got it somewhat close, but it’s not very accurate yet:

async function createInputHandling() {
  var numInputs = 0;
  var previousDiff = 0;

  deviceSourceManager?.onDeviceDisconnectedObservable.add((device) => {
    numInputs--;
  });

  deviceSourceManager?.onDeviceConnectedObservable.add((device) => {
    if (device.deviceType === DeviceType.Touch) {
      numInputs++;
      const touch: DeviceSource<DeviceType.Touch> = device;
      touch.onInputChangedObservable.add((touchEvent) => {
        const diff = touchEvent.previousState - touchEvent.currentState;

        if (model?.isEnabled()) {
          if (numInputs === 1) {
            // Single touch: drag the model around.
            if (touchEvent.inputIndex === PointerInput.Horizontal) {
              model.position.x -= diff / 1000;
            } else {
              model.position.z += diff / 750;
            }
          }
          // Panning does rotation.
          else if (
            numInputs === 2 &&
            touchEvent.inputIndex === PointerInput.Horizontal &&
            touchEvent.deviceSlot === 0
          ) {
            model.rotate(Vector3.Up(), diff / 200);
          } else if (
            numInputs === 2 &&
            touchEvent.inputIndex === PointerInput.Vertical &&
            touchEvent.deviceSlot === 0
          ) {
            // Two touches moving vertically: treat as a pinch and scale the model.
            let input1 = device.getInput(0);
            let input2 = device.getInput(1);
            let upperTouch = 0;
            let downerTouch = 0;

            if (input1 < input2) {
              upperTouch = input1;
              downerTouch = input2;
            } else {
              upperTouch = input2;
              downerTouch = input1;
            }

            let pinchDiff = downerTouch - upperTouch;

            if (pinchDiff < previousDiff) {
              // zoom out
              model.scaling = new Vector3(
                (model.scaling.x -= 0.03),
                (model.scaling.y -= 0.03),
                (model.scaling.z -= 0.03)
              );
            }
            if (pinchDiff > previousDiff) {
              // zoom in
              model.scaling = new Vector3(
                (model.scaling.x += 0.03),
                (model.scaling.y += 0.03),
                (model.scaling.z += 0.03)
              );
            }
            previousDiff = pinchDiff;
          }
        }
      });
    }
  });
}
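As a side note, a two-finger pinch tends to be more robust when measured as the Euclidean distance between both touch points (using both x and y of each finger), rather than comparing a single axis as above. A minimal, dependency-free sketch (the helper names here are illustrative, not Babylon APIs):

```javascript
// Euclidean distance between two touch points {x, y}.
function pinchDistance(touchA, touchB) {
  const dx = touchB.x - touchA.x;
  const dy = touchB.y - touchA.y;
  return Math.hypot(dx, dy);
}

// Incremental scale factor between two frames of a pinch:
// > 1 means the fingers spread apart, < 1 means they moved together.
function pinchScale(prevDistance, currentDistance) {
  // Guard against division by zero on the first frame.
  return prevDistance === 0 ? 1 : currentDistance / prevDistance;
}
```

Applying `pinchScale` multiplicatively to the model’s scaling (instead of adding a fixed 0.03 step) also makes the zoom feel proportional to how far the fingers actually moved.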

Ok gotcha. If your ultimate goal is to just get the same behavior as ArcRotateCamera in Babylon React Native, then the best solution would just be ArcRotateCamera properly functioning in Babylon React Native. :slight_smile: For that, @PolygonalSun should have the latest info.

If you have a broader goal of recognizing touch gestures (pan, pinch, twist, etc.) and doing some arbitrary behavior (such as manipulating a model), then I think something like what you have done above will be required. For that I would recommend using an existing JavaScript gesture recognition library. Ideally such a library would exist that is not tied to DOM APIs (where you could just call functions to communicate pointer state changes), but unfortunately I’m not aware of any existing libraries that work this way. The next best bet would be to convert DeviceSourceManager events into DOM events and feed those into an existing JavaScript gesture recognition library. This is the route that I have taken for one of the projects I work on, and I’m happy to share my code as an example:

import {
    DeviceSource,
    DeviceSourceManager,
    DeviceType,
    PointerInput,
} from '@babylonjs/core';
import {
    useRef,
    useState,
} from 'react';
import {
    useEffectAsync,
} from '../../utility/AsyncEffect';
import {
    AsyncLock,
} from '../../utility/AsyncLock';
import {
    CancellationToken,
} from "../../utility/CancellationToken";

declare const global: any;

// This stubs out the minimal surface area of the global document object to make Hammer.js functional.
const documentStub: Partial<Document> = {
    createElement: (): HTMLElement => {
        const style: Partial<CSSStyleDeclaration> = {
            touchAction: "",
        };
        const element: Partial<HTMLElement> = {
            style: style as CSSStyleDeclaration,
        };
        return element as HTMLElement;
    },
};
// This class fulfills the contract of EventTarget which is the part of HTMLElement that Hammer.js actually cares about.
// It lets us route our own input through this contract so Hammer.js can receive raw input and translate it to gestures.
class SyntheticEventTarget implements EventTarget {
    private readonly handlerMap = new Map<string, Set<EventListenerOrEventListenerObject>>();

    // Hammer.js uses this to allow movement outside of an HTML element to continue contributing to a gesture.
    // For our purpose, we can simply treat this EventTarget as its own parent window.
    public get parentWindow(): any {
        return this;
    }

    public dispatchEvent(event: Event): boolean {
        const handlers = this.handlerMap.get(event.type);
        if (handlers) {
            handlers.forEach(handler => {
                if ('handleEvent' in handler) {
                    handler.handleEvent(event);
                } else {
                    handler(event);
                }
            });
        }

        return true;
    }

    public addEventListener(eventType: string, listener: EventListenerOrEventListenerObject): void {
        let handlers = this.handlerMap.get(eventType);
        if (!handlers) {
            handlers = new Set<EventListenerOrEventListenerObject>();
            this.handlerMap.set(eventType, handlers);
        }
        handlers.add(listener);
    }

    public removeEventListener(eventType: string, listener: EventListenerOrEventListenerObject): void {
        const handlers = this.handlerMap.get(eventType);
        if (handlers?.delete(listener) && handlers.size === 0) {
            this.handlerMap.delete(eventType);
        }
    }
}
export enum Gesture {
    tap,
    press,
    pan,
    twist,
    pinch,
}

export enum GestureStage {
    start,
    update,
    end,
}

export type GestureEvent<T extends Gesture> =
    T extends Gesture.tap ? { gesture: T, x: number, y: number } :
    T extends Gesture.press ? { gesture: T, stage: GestureStage.start | GestureStage.end, x: number, y: number } :
    T extends Gesture.pan ? { gesture: T, stage: GestureStage, x: number, y: number, xDelta: number, yDelta: number } :
    T extends Gesture.twist ? { gesture: T, stage: GestureStage, rotation: number, rotationDelta: number } :
    T extends Gesture.pinch ? { gesture: T, stage: GestureStage, scale: number, scaleDelta: number } :
    never;
export type GestureEventHandler = (gestureEvent: GestureEvent<Gesture>) => void;

export function useGestureRecognition(
    deviceSourceManager: DeviceSourceManager | undefined,
    onGestureRecognized: GestureEventHandler | undefined,
): boolean {

    const asyncLock = useRef(new AsyncLock()).current;
    const [isEnabled, setIsEnabled] = useState(false);

    useEffectAsync(async (cancellationToken: CancellationToken) => {
        if (deviceSourceManager && onGestureRecognized) {
            // Hammer.js executes code at import time that expects to find the global document object.
            // To deal with this, define a stubbed out global document, then dynamically import Hammer.js,
            // then remove the stubbed out global document (as it would otherwise affect other libs, like Babylon).
            global.document = documentStub;
            const { TouchInput, Tap, Press, Pan, Pinch, Rotate } = await import('hammerjs');
            global.document = undefined;

            if (!cancellationToken.cancelled) {
                const eventTarget: EventTarget = new SyntheticEventTarget();

                // Configure the gestures we want to recognize.
                const recognizers: RecognizerTuple[] = [
                    [Tap, { time: 100 }], // If the time between touch down and touch up is less than 100ms, consider it a tap.
                    [Press, { time: 100 }], // If a touch is down for more than 100ms, consider it a press.
                    [Pan], // Pan with default options.
                    [Rotate, {}, ['pan']], // Rotate is allowed to happen at the same time as pan.
                    [Pinch, {}, ['rotate', 'pan']], // Pinch is allowed to happen at the same time as rotate and pan.
                ];

                // Instantiate Hammer, passing it the "synthetic" EventTarget as an HTMLElement, and configure it for touch input and with the recognizers defined above.
                const hammer = new Hammer(eventTarget as unknown as HTMLElement, { inputClass: TouchInput, recognizers: recognizers });

                // Tap handler
                hammer.on("tap", (gestureEvent: HammerInput) => {
                    onGestureRecognized({ gesture: Gesture.tap, x: gestureEvent.center.x, y: gestureEvent.center.y });
                });

                // Press handler
                hammer.on("press pressup", (gestureEvent: HammerInput) => {
                    const stage = gestureEvent.type === "press" ? GestureStage.start : GestureStage.end;
                    onGestureRecognized({ gesture: Gesture.press, stage: stage, x: gestureEvent.center.x, y: gestureEvent.center.y });
                });

                // Pan handler
                let lastPan = { x: 0, y: 0 };
                hammer.on("panstart panmove panend", (gestureEvent: HammerInput) => {
                    const stage =
                        gestureEvent.type === "panstart" ? GestureStage.start :
                            gestureEvent.type === "panend" ? GestureStage.end :
                                GestureStage.update;

                    if (stage === GestureStage.start) {
                        lastPan = { x: gestureEvent.center.x, y: gestureEvent.center.y };
                    }

                    onGestureRecognized({ gesture: Gesture.pan, stage: stage, x: gestureEvent.center.x, y: gestureEvent.center.y, xDelta: gestureEvent.center.x - lastPan.x, yDelta: gestureEvent.center.y - lastPan.y });

                    lastPan = { x: gestureEvent.center.x, y: gestureEvent.center.y };
                });

                // Rotate handler
                let firstRotate = 0;
                let lastRotate = 0;
                hammer.on("rotatestart rotatemove rotateend", (gestureEvent: HammerInput) => {
                    const stage =
                        gestureEvent.type === "rotatestart" ? GestureStage.start :
                            gestureEvent.type === "rotateend" ? GestureStage.end :
                                GestureStage.update;

                    if (stage === GestureStage.start) {
                        firstRotate = lastRotate = gestureEvent.rotation;
                    }

                    onGestureRecognized({ gesture: Gesture.twist, stage: stage, rotation: gestureEvent.rotation - firstRotate, rotationDelta: gestureEvent.rotation - lastRotate });

                    lastRotate = gestureEvent.rotation;
                });

                // Pinch handler
                let lastScale = 0;
                hammer.on("pinchstart pinchmove pinchend", (gestureEvent: HammerInput) => {
                    const stage =
                        gestureEvent.type === "pinchstart" ? GestureStage.start :
                            gestureEvent.type === "pinchend" ? GestureStage.end :
                                GestureStage.update;

                    if (stage === GestureStage.start) {
                        lastScale = gestureEvent.scale;
                    }

                    onGestureRecognized({ gesture: Gesture.pinch, stage: stage, scale: gestureEvent.scale, scaleDelta: gestureEvent.scale / lastScale });

                    lastScale = gestureEvent.scale;
                });

                const afterDeviceConnectedObserver = deviceSourceManager.onDeviceConnectedObservable.add(deviceEventData => {
                    if (deviceEventData !== undefined && deviceEventData.deviceType === DeviceType.Touch) {
                        const changedTouchDeviceSource: DeviceSource<DeviceType.Touch> = deviceSourceManager.getDeviceSource(deviceEventData.deviceType, deviceEventData.deviceSlot)!;
                        changedTouchDeviceSource.onInputChangedObservable.add(inputEventData => {
                            // 'Touch' is the contract for DOM touches.
                            const touches: Array<Partial<Touch>> = [];
                            const changedTouches: Array<Partial<Touch>> = [];

                            // We need to report all active touches, so enumerate all touch device sources.
                            for (const touchDeviceSource of deviceSourceManager.getDeviceSources(DeviceType.Touch)) {
                                const touch: Partial<Touch> = {
                                    identifier: touchDeviceSource.deviceSlot,
                                    clientX: touchDeviceSource.getInput(PointerInput.Horizontal),
                                    clientY: touchDeviceSource.getInput(PointerInput.Vertical),
                                    target: eventTarget,
                                };

                                // Changed touches should only include the source of the current touch event.
                                if (touchDeviceSource === changedTouchDeviceSource) {
                                    changedTouches.push(touch);
                                }

                                // Touches should include all touches, including the source of the current touch event.
                                touches.push(touch);
                            }

                            const event: Partial<TouchEvent> = {
                                type: inputEventData.inputIndex === PointerInput.LeftClick ? (inputEventData.currentState === 0 ? "touchend" : "touchstart") : "touchmove",
                                touches: touches as unknown as TouchList,
                                changedTouches: changedTouches as unknown as TouchList,
                            };

                            eventTarget.dispatchEvent(event as Event);
                        });
                    }
                });

                setIsEnabled(true);

                return (isMounted: boolean) => {
                    deviceSourceManager.onDeviceConnectedObservable.remove(afterDeviceConnectedObserver);
                    hammer.destroy();

                    if (isMounted) {
                        setIsEnabled(false);
                    }
                };
            }
        }

        return undefined;
    }, asyncLock, [deviceSourceManager, onGestureRecognized]);

    return isEnabled;
}

Some notes on this:

  • It’s a custom React hook, and you just pass in a DeviceSourceManager and a gesture handler callback.
  • It uses some of our other constructs for dealing with asynchrony, but you could simplify it by replacing the await import('hammerjs') with a require('hammerjs') and just using a regular synchronous useEffect.
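For reference, here is one way the pinch `scaleDelta` reported by a hook like this could be mapped onto an ArcRotateCamera-style zoom. This is just a sketch under assumptions: `camera` is any object with a numeric `radius` property, not an actual Babylon camera call, and the clamp bounds are arbitrary:

```javascript
// Apply an incremental pinch scale factor to a camera-like object's radius.
// Dividing by scaleDelta zooms in (smaller radius) as the fingers spread apart.
function applyPinchZoom(camera, scaleDelta, minRadius = 1, maxRadius = 100) {
  const next = camera.radius / scaleDelta;
  // Clamp so the camera can't pass through the target or fly off to infinity.
  camera.radius = Math.min(maxRadius, Math.max(minRadius, next));
  return camera.radius;
}
```

Because `scaleDelta` is the ratio between consecutive pinch events, applying it multiplicatively like this keeps the zoom rate consistent regardless of how frequently the gesture events fire.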

Thank you very much! This looks like a much better alternative! I need both ArcRotateCamera and base scene interaction, like moving models and such, so I will look into what you suggested. For the ArcRotateCamera I was thinking about wrapping the EngineView in a React Native Gesture Handler view and doing something like this: GitHub - EvanBacon/expo-three-orbit-controls: 🎥 Three.js Orbit Controls (Camera) bridged into React Native. What do you think?

Adding @PolygonalSun who is hard at work on the inputs part at the moment :slight_smile:

I know that we do have these types of orbit controls on the JS side, and once we get Babylon.js’ InputManager to work with the Native side, they should become available as soon as the Native repos are updated with the new code. The plan is to have these changes in for 5.0 (releasing in Spring, iirc). That might be a bit far out, depending on your needs though.