GeospatialCamera and Genericized Camera Inputs

I stumbled across the Experimental GeospatialCamera and incorporated it into my work-in-progress Tile Map Server client using Babylon. It’s showing promise.

I wanted to add Touch and Pinch support so I used the ArcRotateCameraPointersInput as a starting point.

ArcRotateCameraPointersInput is already almost generic enough to be usable with other cameras, if a few tweaks were added.

Most of the code supports internal state tracking and is not application-specific.

A big question relating to abstracting/separating camera from inputs: why are some properties available on the camera (e.g. pinchToPanMaxDistance) and some available on the input (e.g. angularSensibilityX, pinchPrecision, static MinimumRadiusForPinch)? It looks like for the ArcRotateCamera, properties on the ArcRotateCameraPointersInput are given accessors on the camera. Is that really necessary? It seems to unnecessarily tie the ArcRotateCamera to a specifically-coded PointersInput.

It seems plausible to adapt a generic TouchInput class to work with a variety of cameras so that adding specific input controls is less tedious.

For the touch control at the end of the playground, I am simply outputting the parameters to a popup box so I can play with the control and see how it works.

The primary functions that a user would need to define are:

  • _computeMultiTouchPanning(previousMultiTouchPanPosition, multiTouchPanPosition)
  • _computePinchZoom(previousPinchSquaredDistance, pinchSquaredDistance)
  • getClassName
  • getSimpleName
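As a sketch of what a user-defined input might look like, here is the inheritance approach: the base class below is hypothetical (loosely modeled on the shape of ArcRotateCameraPointersInput, not Babylon's actual API), and a concrete input supplies only those four members.

```typescript
// Hypothetical generic base class; NOT Babylon's actual API.
type Point = { x: number; y: number };

abstract class GenericTouchInput {
  // Two-finger drag: react to the midpoint of the touches moving.
  abstract _computeMultiTouchPanning(
    previousMultiTouchPanPosition: Point | null,
    multiTouchPanPosition: Point | null
  ): void;

  // Pinch: react to the change in squared distance between touches.
  abstract _computePinchZoom(
    previousPinchSquaredDistance: number,
    pinchSquaredDistance: number
  ): void;

  abstract getClassName(): string;
  abstract getSimpleName(): string;
}

// A concrete input only supplies the camera-specific math.
class GeospatialTouchInput extends GenericTouchInput {
  panOffset: Point = { x: 0, y: 0 };
  zoomFactor = 1;

  _computeMultiTouchPanning(prev: Point | null, curr: Point | null): void {
    if (prev && curr) {
      this.panOffset.x += curr.x - prev.x;
      this.panOffset.y += curr.y - prev.y;
    }
  }

  _computePinchZoom(prevSq: number, currSq: number): void {
    // Ratio of distances (not squared distances), so take the square root.
    if (prevSq > 0) this.zoomFactor *= Math.sqrt(currSq / prevSq);
  }

  getClassName(): string { return "GeospatialTouchInput"; }
  getSimpleName(): string { return "touch"; }
}
```

The base class owns the gesture state machine; the subclass owns what panning and zooming mean for its particular camera.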

The properties that control this input method include:

  • panningSensibility
  • pinchInwards
  • multiTouchPanAndZoom
  • multiTouchPanning
  • pinchZoom
  • useNaturalPinchZoom
  • pinchDeltaPercentage
  • pinchPrecision
  • angularSensibilityX
  • angularSensibilityY
  • static MinimumRadiusForPinch
  • (super?) _ctrlKey
  • camera._useCtrlForPanning
  • _isPanClick

The PointersInput does a pretty good job of isolating the functionality within the class and keeping track of its own state with respect to multi-touch pinch vs zoom. I’ve renamed it to GeospatialTouchInput, but it could be adapted so touch-based pinch/zoom could then easily be incorporated into other cameras. I’m not sure if it should be adapted as an inherited class or with a constructor that takes the customized methods, or an instance to which the user assigns custom methods.
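To compare with the inheritance option, here is a sketch of the constructor-takes-the-customized-methods option. All names here are hypothetical, invented for illustration:

```typescript
// One reusable input class; the camera-specific behavior is injected
// through the constructor instead of supplied by a subclass.
type TouchPoint = { x: number; y: number };

interface TouchInputHandlers {
  computeMultiTouchPanning(prev: TouchPoint | null, curr: TouchPoint | null): void;
  computePinchZoom(previousPinchSquaredDistance: number, pinchSquaredDistance: number): void;
  simpleName: string;
}

class CallbackTouchInput {
  constructor(private handlers: TouchInputHandlers) {}

  getSimpleName(): string {
    return this.handlers.simpleName;
  }

  // The input's internal pointer/touch state machine would invoke these
  // at the appropriate points in a gesture.
  onMultiTouchPan(prev: TouchPoint | null, curr: TouchPoint | null): void {
    this.handlers.computeMultiTouchPanning(prev, curr);
  }

  onPinch(prevSquaredDistance: number, squaredDistance: number): void {
    this.handlers.computePinchZoom(prevSquaredDistance, squaredDistance);
  }
}

// Usage: the caller wires gesture deltas to whatever camera motion it wants.
let zoom = 1;
const touchInput = new CallbackTouchInput({
  computeMultiTouchPanning: () => { /* move the camera here */ },
  computePinchZoom: (p, c) => { if (p > 0) zoom *= Math.sqrt(c / p); },
  simpleName: "touch",
});
```

The trade-off: callbacks keep the gesture-tracking class closed and reusable, while subclassing gives the custom code access to protected state.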

I have some recommendations on improving GeospatialCamera, but I’ll save those for another post.


cc @georgie

Hey @HiGreg, thanks for posting! You're speaking to a problem that is very top of mind for me, as I am currently working on some changes to the geospatial camera system to de-obfuscate how the camera is controlled by the user (so that the properties controlling it are not spread across input classes and the camera) and so that the classes have very clear responsibilities: input classes gather input deltas, the camera handles recalculating matrices to account for per-frame movement, and a new movement class is responsible for translating pixel deltas into per-frame movement, taking limits/speed/inertia into account. It should be live by EOW. After that gets pushed I'll ping here so that you can incorporate the new API into your project (as it's experimental it will not be backward compatible) and we can see how it relates to your suggestions!

As for the feedback about the geocam, it's possible I'm already addressing it, but feel free to share 🙂


What you’ve written sounds great!

Issues I have with the current GeospatialCamera are

  • the forced use of "pick"
  • the spinning during pan at/near the north and south poles
  • zoom doesn't always stop at surface + altitude

I’d prefer to not have to use pick and instead use a position/radius.

I’m not sure if my ideas are fully baked, but here they are:

  • clearly define reference points, such as
    • forward (e.g. lookAt, target) as TransformNode or Vector3
    • up,
    • geoCenter as TransformNode or Vector3
  • pursuant to "clear responsibilities," offer a few different ways to move the camera, such as
    • straight-line movement (vector3 supplied, could be forward)
    • movement along radius around geoCenter (specified as 2d tangent vector, but what reference point?)
    • optional update forward Vector3 unless forward is TransformNode.
    • zoom by forward movement, or by fov
    • (possible) zoom to radius/intersection then remaining zoom by fov
    • pivot axis/angle around specified Vector3 point with optional “change look” afterwards.
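The last bullet, pivoting around a specified point, reduces to rotating the camera position about an axis through that point. A minimal, engine-independent sketch of the math using Rodrigues' rotation formula (the tiny Vec3 helpers are mine, not Babylon's Vector3):

```typescript
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a: Vec3, b: Vec3): Vec3 => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});

// Rotate `point` by `angle` radians around the unit-vector `axis`
// passing through `pivot`: v' = v cosθ + (axis × v) sinθ + axis (axis · v)(1 − cosθ)
function pivotAroundPoint(point: Vec3, pivot: Vec3, axis: Vec3, angle: number): Vec3 {
  const v = sub(point, pivot);
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  const rotated = add(
    add(scale(v, cos), scale(cross(axis, v), sin)),
    scale(axis, dot(axis, v) * (1 - cos))
  );
  return add(pivot, rotated);
}
```

The optional "change look" afterwards would then re-aim the camera at whatever reference point (target, geoCenter) is in effect.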

Then if generic input classes are defined that accept callbacks (or are used as extended classes), the input actions can be tied to the desired selection of camera actions. ("Generic" in the sense of being usable with various cameras and tying them to the desired camera actions.)

I kind of prefer specifying zoom as "the amount to increase the apparent size of a (hypothetical) object at a specified distance." But the math is more complicated than moving the camera position by a Vector3.

For a distance d, a width of one unit (world space) is:

2*d*Math.tan(camera.fov/2)

(see my dolly zoom post to see changing fov and zoom such that width at a distance is constant.)
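Putting that formula into code, along with its inverse, which is my reading of the dolly-zoom idea (hold the width at a distance constant by solving for fov after the camera moves):

```typescript
// Width (world units) spanned by the full field of view at distance d,
// per the formula above: w = 2 * d * tan(fov / 2).
function widthAtDistance(d: number, fov: number): number {
  return 2 * d * Math.tan(fov / 2);
}

// Dolly zoom: after the camera moves so the distance becomes dNew,
// solve for the fov that keeps the width at that distance unchanged.
function fovForConstantWidth(width: number, dNew: number): number {
  return 2 * Math.atan(width / (2 * dNew));
}
```

So zooming "by fov" and zooming "by forward movement" can be mixed: move the camera, then correct fov with `fovForConstantWidth` if the framing should stay put.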

I’m very interested in seeing your next release of GeospatialCamera!

A few more observations after digging into cameras and inputs. These are just what I hope to help implement.

I don’t prefer picking, but I do think it is useful. I discovered that it is part of _applyZoom, which itself runs within the input/render loop. That makes sense when the zoom is driven by wheel movement. For touch input and pinch-to-zoom, though, the pick should be done only on the initial onMultiTouch, not continuously during pinching. (I’m coding a touch-aware input class.)

It might be useful if multiple inputs on a camera could coexist. This might only be an issue for wheel versus pinch, but I also suspect keyboard input might conflict with keyboard modifiers on mouse movements. POINTER events aggregate both mouse and touch, so it makes sense to watch for conflicts between inputs responding to pointer, mouse, and/or touch events. Each input class should (IMO) be able to stand alone and play nicely with others.

Many parameters are sensibility-based. Would it be prudent to gather those into an object? I’d guess that object would live on an input, but then what does the interaction between different settings on different inputs look like? Can they be correlated when needed?

Can the target change dynamically? If I have two spheres in the scene, can I easily switch the camera to reference one sphere or the other? At minimum, the radius/altitude could be different.

How are bump maps or height maps accommodated?

It would be interesting to be able to mimic an ArcRotateCamera, an ArcRotateCamera with look != target, or an ArcRotateCamera that doesn’t lock at the poles. Could this also be a substitute for UniversalCamera or FreeCamera? Maybe I don’t know enough of the inner details, but it will be interesting to explore.

Last thing for now: the mechanism of adding an input through inputManager.add() creates a nested function call for each input.checkInputs() in the chain. It seems like this would be better implemented as an array of checkInputs functions called in a forEach loop over the array. Something like:

attachedInputs.forEach(i => i.checkInputs?.());
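A minimal sketch of that flat-array alternative; the manager and input shapes are hypothetical, not Babylon's CameraInputsManager:

```typescript
interface CameraInput {
  getSimpleName(): string;
  checkInputs?: () => void; // optional, matching the optional-chaining call above
}

class FlatInputsManager {
  private attachedInputs: CameraInput[] = [];

  add(input: CameraInput): void {
    this.attachedInputs.push(input);
  }

  remove(input: CameraInput): void {
    this.attachedInputs = this.attachedInputs.filter((i) => i !== input);
  }

  // One flat loop per frame instead of a chain of nested wrapper calls.
  checkInputs(): void {
    this.attachedInputs.forEach((i) => i.checkInputs?.());
  }
}
```

Detaching an input then becomes a simple array removal rather than unwinding a chain of closures.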

@georgie, looking forward to your updated GeospatialCamera!