Input Manager to rule them all

Hello,

Considered as a game engine, BabylonJS lacks one big, essential feature: an input manager that lets the developer abstract their code from the hardware (gamepad, mouse, touch, keyboard) when programming around user actions.

I started discussing this possibility back when I first discovered BabylonJS:

After playing around with other engines, it seems that one of the simplest and most powerful implementations comes from the Godot Engine:

Basically, you would write code that refers to an action, instead of a specific input controller:

if Input.is_action_pressed("ui_right"):
  velocity.x -= 1

And you would map this action to any kind of input using Godot’s input map, which looks like this:

Obviously, in BabylonJS, the input map would be a config file (e.g. a JSON file, or part of the .babylon file).
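For illustration, such a config might look something like this (just a sketch; the device and input names are made up):

// Hypothetical contents of an "input map" config: action names map to a list of bindings.
var inputMap = {
    "FireBullet": [
        { device: "keyboard", input: "KeyF" },
        { device: "pointer",  input: "left_click" },
        { device: "gamepad",  input: "button_A" }
    ],
    "ui_right": [
        { device: "keyboard", input: "ArrowRight" },
        { device: "gamepad",  input: "dpad_right" }
    ]
};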

Although I did not have time to follow this up, I assume that Phaser 3 would also be a great source of inspiration: Phaser 3 API Documentation - Class: InputPlugin


Sounds like a cool idea, would you fancy making a PR for it?

That would be quite a big piece of work to be honest…

I guess there could be a new Observer named “UserActionObserver”.

That observer would be particular in the sense that it has no events to observe at first.
The game developer would register user actions as:

  • a name (e.g. “FireBullet”)
  • what they are observing (e.g. “K” on the keyboard, right-click on the mouse, the “A” button on the gamepad)

The “UserActionObserver” would subscribe to the related observables (e.g. Scene.onKeyboardObservable, Scene.onPointerObservable) and relay a “FireBullet” event when it receives an event from these observables.

We could use “UserActionObserver” this way:


// Registering Action
// ===================
var userActionObs = new UserActionObserver();

// By default, we observe discrete events (on/off):
userActionObs.addUserAction({ name: "FireBomb",
                              observables: [ {device: "gamepad",  input: "button_A"},
                                             {device: "keyboard",  input: "key_A"},
                                             {device: "pointer",  input: "left_click"}
                                            ]
                             });

// But there may be complicated cases where we would like to link an object (e.g. camera zoom) with 
// continuous (pinch) and discrete (key press) inputs...
userActionObs.addUserAction({ name: "ZoomMap",
                              observables: [ {device: "gamepad",  input: "Axis0", repeat_rate: 0.1},
                                             {device: "keyboard",  input: "key_Z", repeat_rate: 0.1},
                                             {device: "pointer",  input: "pinch_delta" }
                                            ]
                             });

// Listening to Actions:
// =====================

userActionObs.actions('ZoomMap').addObserver(myFunction)

function myFunction(actionStrength) {
     // do something with actionStrength, which is between -1 and 1?
}
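Internally, the relay could sit on top of the existing observables. Here is a minimal sketch for the keyboard case (UserActionObserver and onActionObservable are hypothetical names; Scene.onKeyboardObservable, KeyboardEventTypes and Observable are existing Babylon APIs):

// Sketch only: relays KEYDOWN events of registered keys as named actions.
class UserActionObserver {
    constructor(scene) {
        this.onActionObservable = new BABYLON.Observable();
        this._keyToAction = {};                       // e.g. { "a": "FireBomb" }
        scene.onKeyboardObservable.add((kbInfo) => {
            if (kbInfo.type === BABYLON.KeyboardEventTypes.KEYDOWN) {
                var action = this._keyToAction[kbInfo.event.key];
                if (action) {
                    this.onActionObservable.notifyObservers({ name: action, strength: 1 });
                }
            }
        });
    }
    addUserAction(name, key) {                        // keyboard-only, for brevity
        this._keyToAction[key] = name;
    }
}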

Not sure about it.

I kind of like it :slight_smile:

@Deltakosh any thoughts ?

Well, I’m not sure I get the use case. Why not just use the gamepad observables?
Or the keyboard or pointer observables?

You can still create a function that centralizes your input. The only thing you need to do is register that function with the inputs, right?

Unless I misunderstood the overall idea? The UserActionObserver seems like just a small simplification, and I’m not sure it is worth bloating the framework with it.

This really needs to happen. I am new to Babylon, but have been reading through the API and this is one of the major reasons why I’m still questioning if I should switch to using this for my projects.

I would like a keymap function where I could map actions to input events. I would like a file or UI where my users could select their own input for actions. For example, if holding down the right mouse button is “move forward” by default, my users could choose to have W move forward instead, and either remove or keep the right mouse button binding.

Modifier keys are also extremely annoying to deal with. If I want either Alt key, I need to do several checks, whereas having an abstract “alt” would make this much easier.
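For example, something as small as this would already help (a sketch; normalizeModifier is a made-up helper, KeyboardEvent.code is the standard DOM property):

// Sketch: collapse the left/right variants reported by KeyboardEvent.code into abstract names.
function normalizeModifier(code) {
    if (code === "AltLeft" || code === "AltRight") return "alt";
    if (code === "ControlLeft" || code === "ControlRight") return "ctrl";
    if (code === "ShiftLeft" || code === "ShiftRight") return "shift";
    return code;
}

window.addEventListener("keydown", (e) => {
    console.log(normalizeModifier(e.code));   // prints "alt" for either physical Alt key
});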

Touch is another very difficult problem. If I want a swipe forward, I need to write all kinds of code or use a library to do all the detection. The same goes for a 2-finger double-tap, a 4-finger tap, a single tap at the top of the screen, or a swipe up with three fingers. The touch API is very raw, and my development machine does not have a touchscreen, so it’s extremely difficult for me: I have to push a production build to the web and test on my phone every time I need to test a touch action.
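Just to show how much boilerplate a single gesture costs, even a naive “swipe up” detector from raw touch events looks like this (a rough sketch, assuming canvas is the rendering canvas element; not production code):

// Sketch: detect a one-finger upward swipe on the rendering canvas.
var swipeStartY = null;
canvas.addEventListener("touchstart", (e) => {
    if (e.touches.length === 1) swipeStartY = e.touches[0].clientY;
});
canvas.addEventListener("touchend", (e) => {
    if (swipeStartY === null) return;
    var endY = e.changedTouches[0].clientY;
    if (swipeStartY - endY > 50) {            // finger travelled up by more than 50px
        console.log("swipe up");              // this is where the abstract action would fire
    }
    swipeStartY = null;
});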

I have built a keymapping system that has worked quite well. I have scenes register their own keymap, and I have a global keymap. A keymap looks like:

globalKeyMap.addKeyMap("scene1", [
    {event:"jump", key:"ArrowUp", mods:["shift"]},
    {event:"duck", key:"ArrowDown", mods:["ctrl"]}
])

The default is for state to be 1 (pushed), but another attribute value, state:0, means released.

This allows keys to be declared at the top of any scene file. I then have an onInput event that runs whenever some kind of input happens. That event object contains the original events that triggered it, abstracted items like state for pushed and released, as well as a keymapEvent attribute holding the keymap event that was triggered by that input. Then any handler listening to the onInput event can just check:

if(event.keymapEvent)
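For illustration, a handler subscribed to that onInput event could then dispatch purely on the abstract name (a sketch; onInput, player and the event shape follow my description above and are not an existing Babylon API):

// Sketch: dispatch on the abstract keymap event instead of the raw key/button.
onInput((event) => {
    if (!event.keymapEvent) return;           // nothing mapped for this raw input
    switch (event.keymapEvent) {
        case "jump": player.jump(); break;
        case "duck": player.duck(); break;
    }
    // event.state (1 = pushed, 0 = released) and the original events are also available here
});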

I find this structure immeasurably useful for all apps that I’m making, games or not. You never know when your users will want to change their key bindings; it makes updating the default keys much easier, and it makes your code much easier to read.


So the idea would be to create a class that will raise observable notifications based on this kind of struct?
{event:"jump", key:"ArrowUp", mods:["shift"]}

This will not help with touch though

Well, just replace key with input, then you have:
{event:"jump", input:"ArrowUp"},
{event:"jump", input:"2-finger-1-tap"}

We will need to create a touch event manager that has input names for different touches, as there are no browser names for touch gestures. zingtouch is my favorite touch library. We would just need to have strings, like I do above, and/or have people pass in the gesture object when creating the input map / key map.
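For reference, binding a two-finger tap with zingtouch looks roughly like this (from memory of its API, so treat the details as approximate; canvas is assumed to be the rendering canvas):

// Approximate zingtouch usage: a custom two-finger tap gesture bound to the canvas.
var region = new ZingTouch.Region(document.body);
var twoFingerTap = new ZingTouch.Tap({ numInputs: 2, maxDelay: 300, tolerance: 10 });
region.bind(canvas, twoFingerTap, () => {
    // relay the abstract event ("fire", for example) here
});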

For mouse, we could have strings like “mouseleft”, “mouseright”, “mousecenter”, “doublemouseleft”, “doublemouseright”, “doublemousecenter”.
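Deriving those strings from a pointer event would be trivial; something like this (a sketch using the standard PointerEvent.button values; the double-click detection itself is left out):

// Sketch: translate a pointer event into one of the proposed mouse input strings.
function mouseInputName(e, isDoubleClick) {
    var base = ["mouseleft", "mousecenter", "mouseright"][e.button];   // 0 = left, 1 = center, 2 = right
    return isDoubleClick ? "double" + base : base;
}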

I’m interested to see if other people find it useful.
Also, can you list all the input options you would like to support?

The point of a system like this is that developers using the platform learn a small amount of syntax and can then handle any input event.
The events I would definitely like to see handled are:

  • Keyboard
  • Mouse
  • Touch
  • VR Controllers
  • Game Controllers
  • Gyroscope

Then one could install extras like a speech input handler or an AR gesture input handler, and they could all abstract to the same system.
If bloat is a worry, then the core would be:

  • Keyboard
  • Mouse
  • Touch

Then you can choose to install the handlers as extra modules.
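As a sketch of what an “extra module” could mean, each handler would only have to translate its raw device events into the shared abstract notification (all names below are hypothetical):

// Sketch: a minimal opt-in handler module; the core manager never sees the raw device API.
class GyroscopeHandler {
    constructor(inputManager) {
        window.addEventListener("deviceorientation", (e) => {
            inputManager.notify({ device: "gyroscope", input: "tilt_front_back", value: e.beta });
        });
    }
}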

OK, but for touch and mouse, can you be more specific? The idea here is to capture a precise need if someone (including me) is motivated to take a stab at it.

OK I did a first try here:

Only for keyboard so far but this will help with this discussion

Please let me know what exactly you’re looking for if this doesn’t answer your question:

Final use case:

I want users to be able to fire a gun across all platforms. The keyboard fire key is f, they can press the right mouse button, they can tap once with 2 fingers, or they can use A on the controller. When they release any of those gestures, the gun stops firing.

Here is the code adding those events to a keymap:

const my_keymap = new KeyMap([
	{input:"key_f", event:"fire", state:1},
	{input:"key_f", event:"stop_fire", state:0},
	{input:"rightmousebutton", event:"fire", state:1},
	{input:"rightmousebutton", event:"stop_fire", state:0}
])

// taken from the ZingTouch idea for touch events (TouchEvent here is a proposed class, not the DOM one)
const two_finger_touch = new TouchEvent({
	type: "tap",
	maxDelay: -1, //this is to say that the tap could go on forever
	numInputs: 2,
	tolerance: 10,
})

my_keymap.add([
	{input:two_finger_touch, event:"fire", state:1},
	{input:two_finger_touch, event:"stop_fire", state:0}
])

Here is a module that only works for keyboard in JavaScript, but the other functions are there; they just need to be tweaked a little:

/*
Manages the usage of keymaps.
Usage:
    keymap = new KeyMap()
    keymap.add({key:["f"], mods:["shift"], event:"flight"})
*/

//functions for checking if two arrays have the same items but in a different order
const containsAll = (arr1, arr2) =>
	arr2.every(arr2Item => arr1.includes(arr2Item))
//use this to do your array same check
const sameMembers = (arr1, arr2) => 
	containsAll(arr1, arr2) && containsAll(arr2, arr1) && arr1.length === arr2.length
// sameMembers(arr1, arr2) // will return true or false


export default class KeyMap {
	constructor(keymap=[]){
		this._keymap = []

		this.add(keymap)
	}

	_addSingle(mapping={}){
		// adds a key mapping to this._keymap. either one or both of function or event need to be passed.
		let local_mapping = Object.assign({key:[], mods:[], event:null, function:null, state:1}, mapping)
		if(local_mapping["key"] && (local_mapping["function"] || local_mapping["event"])){
			local_mapping["key"] = Array.isArray(local_mapping['key']) ? local_mapping['key'] : [local_mapping['key']]
			local_mapping["mods"] = Array.isArray(local_mapping["mods"]) ? local_mapping["mods"] : [local_mapping["mods"]]
			this._keymap.push(local_mapping)
		} else{
			let missing_items = Object.keys(local_mapping).map(m=> m !== 'mods' && !local_mapping[m] ? m:null).filter(t=>t)
			missing_items = missing_items.join(', ')
			throw new Error("KeymapError: KeyMap.add was passed a mapping without a value for: " + missing_items)
		}
	}

	add(mapping={}){
		if(Array.isArray(mapping)){
			mapping.forEach((m) => this._addSingle(m))
		} else{
			this._addSingle(mapping)
		}
	}

/*
	getModsSets(mods){
		`converts mods into a couple different sets and returns a list. That way users can use shift,and alt, rather than leftctrl and rightctrl.`
		nutral = []
		with_alt = []
		nutral_with_alt = []
		for mod in mods{
			if "meta" in mod{
				alt = mod.replace("meta", "alt")
				with_alt.add(alt)
				nutral.add("meta")
				nutral_with_alt.add("alt")
			elif not mod in ['left', 'right'] and ('left' in mod or 'right' in mod){
				with_alt.add(mod)
				mod = mod.replace('left', '').replace('right', '').strip()
				nutral.add(mod)
				nutral_with_alt.add(mod)
			else{
				with_alt.add(mod)
				nutral.add(mod)
				nutral_with_alt.add(mod)
		return [with_alt, nutral, nutral_with_alt]
*/

	getEvent({key=[], mods=[], state=1}={}){
		// Gets the event that is triggered by the list of key and mods.
		key = Array.isArray(key) ? key : [key]
		mods = Array.isArray(mods) ? mods : [mods]
		for(let i = 0; i < this._keymap.length; i++){
			const mapping = this._keymap[i]
			if(sameMembers(mapping['key'], key) && sameMembers(mapping['mods'], mods) && mapping['state'] === state){
				if(mapping['function']){
					return mapping['function']()
				}
				return mapping['event']
			}
		}
	}

/*
else if(mods){
				mod_sets = this.getModsSets(mods)
				for mod_set in mod_sets{
					if mapping['key'] == key and mapping['mods'] == mod_set and mapping['state'] == state{
						if mapping['function']{
							return mapping['function']()
						return mapping['event']
*/
}
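To tie the class above to the browser, usage could look like this (a sketch; I lowercase e.key so that Shift+F still matches an "f" mapping, and the import path is illustrative):

// Sketch: feeding DOM keyboard events into the KeyMap module above.
import KeyMap from "./KeyMap.js"

const keymap = new KeyMap([
	{key:["f"], mods:["shift"], event:"flight"}
])

window.addEventListener("keydown", (e) => {
	const mods = []
	if (e.shiftKey) mods.push("shift")
	if (e.ctrlKey) mods.push("ctrl")
	if (e.altKey) mods.push("alt")
	const mapped = keymap.getEvent({key: e.key.toLowerCase(), mods: mods, state: 1})
	if (mapped) console.log(mapped)   // "flight" when Shift+F is pressed
})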

OK, did you check the code I linked? From a keyboard standpoint it should work as expected :slight_smile:

Yes, I need to try it first, but reading through the code it looks like what I’m going for. The reason I have an abstracted object for the keymaps is that it keeps the amount of syntax down for apps, which is very important in my opinion.
I would like to get Nodragem’s opinion on this though.
I would say the spec for my ideal event manager and key mapper is:

  • To map actions and/or functions to any user input event, whether keyboard, touch, mouse, VR controller, game controller, or anything else.
  • Have a place where one can attach listener handlers from a scene or component to catch whatever input is being done and check it for keymapEvents
  • Allow batch entry of keymap objects (to reduce code and make entry of new events easier)
  • Have an event object passed to the input handlers that has the original events, and properties that are standard across all events, especially the keymapEvent or keymapEvents attribute. (I’m not sure how triggering multiple keymap events should be handled)
  • Modifier keys should allow the left and right Alt keys to be handled separately, but also abstract them to just alt.
  • One should be able to register and unregister chunks of key mappings, if a scene changes, or if the area changes in the game.
  • There should be a way to view and modify any registered keymappings, so users can change them. It may also be useful if the keymaps could come from a JSON file that the user can modify (see the sketch after this list).
  • Handle repeated events, like double taps, pressing “h” 3 times, or double-clicking.
  • Handle held events, like holding your finger on the screen for 3 seconds, holding down up arrow, or holding down the right mouse button.
  • Handle combo commands, like double-tapping and holding, or holding “f” and clicking the right mouse button.
  • Handle movement events, like moving the mouse to a position on the screen, or moving your finger in a particular way. Note that I think if a mouse had a “swipe” property, it would probably fit this need.
  • Allow for new types of input to be easily registered, like if a user had speech input, speed input, gyroscope input, sensor input, or camera input that Babylon doesn’t yet support.
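On the JSON point above, loading user-edited bindings would only need a few lines (a sketch reusing the KeyMap class from earlier; the file name and the save step are made up):

// Sketch: bindings live in a user-editable JSON file,
// e.g. [{"key":["f"], "mods":["shift"], "event":"flight", "state":1}, ...]
fetch("keybindings.json")
	.then((response) => response.json())
	.then((bindings) => {
		const userKeymap = new KeyMap(bindings)   // the user's remapped keys become the active map
		// saving edits back is just JSON.stringify of the current mappings,
		// written to wherever the app stores user settings
	})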

What XR does is exactly what I’m looking for.


Pinging @Nodragem for thoughts

Please guys, feel free to do PRs on that branch. I will add support for mouse soon so we can see how it goes, but if you want it to work the way you wish, you will have to help.

Here is a code example of what could be done:

    var hub = new BABYLON.InputHub();

    hub.addEvent(`Yeah`, [
        { device: BABYLON.InputHubDevice.Keyboard, keyCode: 13, altModifier: true }
    ]);

    hub.onEventObservable.add(evt => {
        if (evt === "Yeah") {
            foo();
        }
    })

    var foo = () => {
        console.log("yeah!")
    }

This is really late in this thread for me to start, but I do not think this makes my plans much easier. For WebXR, which can do hand tracking, or ExoKit’s hand-tracking wrapper for Magic Leap, I was planning on trying to determine whether the location of finger bones on either hand had “violated” the space occupied by a button (implemented as a planar mesh). When this occurs, a click is triggered: https://www.youtube.com/watch?v=DZvzXd7Y06s&feature=youtu.be

I am not sure there is such a thing as an AR gesture handler. The only gesture I plan on doing anyway is criss-crossing my hands to summon the interface portal. The callback is the easy part. Figuring out when to run it is the hard part.


Other example:

    var hub = new BABYLON.InputHub();

    hub.addEvent(`Yeah`, [
        { device: BABYLON.InputHubDevice.Keyboard, keyCode: 13, altModifier: true },
        { device: BABYLON.InputHubDevice.Keyboard, keyCode: 32, shiftModifier: true, rightModifier: true },
        { device: BABYLON.InputHubDevice.Keyboard, keyCode: 82, released: true },
    ]);

    hub.onEventObservable.add(evt => {
        if (evt === "Yeah") {
            foo();
        }
    })

    var foo = () => {
        console.log("yeah!")
    }

I really like this idea. When I get some time I’ll try to help out with some controller cases. There is just way too much on my plate at this moment.