How does node.getDirection() exactly work, and how does camera.setTarget() adjust the camera's rotation behind the scenes?

Hey everyone!

So I am revisiting an old problem that I’ve had and have posted about before. I figured it would be good to post again in a new, refreshed post for easier following and a fresh thought process.

The situation is thus:

I have an office space in Blender. I have Empty (Plain Axes) nodes in Blender representing positions I would like to toggle my camera between in the model (“camera anchors”). These camera anchors have actual Camera objects nested underneath them in Blender, just to verify that the camera orientation looks in the desired direction. I adjust a camera anchor node’s position and rotation to represent where I want the camera to move to and how it should be oriented in Babylon.

When I load the Blender-exported glb into Babylon, I use a process that was suggested to me in the past to position and orient a UniversalCamera when a UI click selects a particular camera anchor to move the UniversalCamera to.

The process is as follows:

var newCameraPosition = new Vector3(0, 0, 0);
let dir = configAreaCameraAnchor.getDirection(new Vector3(0, -1, 0));
var newCameraTarget = newCameraPosition.add(dir);

The previous forum posts that this has been discussed in are listed below:

This process works for all of the camera anchors that are within the office space. However, it does NOT work for the one camera anchor that shows the entire floorplan of the office.

It seems that the issue is when the x and y rotations (in Blender) are 0. This seems to throw off the result of these lines of code:

let dir = configAreaCameraAnchor.getDirection(new Vector3(0, -1, 0));
var newCameraTarget = newCameraPosition.add(dir);

It DOES work, however, if the x and y rotations (in Blender) are NON-ZERO.

For instance, when the camera anchor rotation quaternion (w,x,y,z) is (.707, 0, 0, .707), the camera rotation does NOT appear the same in Babylon as it does in Blender, as illustrated below:

rotation in Blender:

rotation quaternion settings:

how the rotation appears in Babylon:

Example of the rotation, if the x and y rotations are NON-ZERO:

rotation in Blender:

rotation quaternion settings:

how the rotation appears in Babylon (correct):

I know that Blender and Babylon have different-handed coordinate systems, but that doesn’t seem to be the issue, as this works fine if the x and y axis rotations are non-zero.

This finally leads me to the questions:
I feel it would greatly help me troubleshoot this problem if I understood how setTarget adjusts the camera’s rotation/rotationQuaternion behind the scenes. Can someone point me to where this code lives on GitHub? I cannot seem to find it.

I also don’t feel like I fully understand why these two lines

let dir = configAreaCameraAnchor.getDirection(new Vector3(0, -1, 0));
var newCameraTarget = newCameraPosition.add(dir);

enable the camera to be targeted in a direction that lines up perfectly with the camera anchor’s x, y, and z rotations?

Isn’t (0,-1,0) the down vector? Why would getting the direction of the down vector (which always prints out as still (0,-1,0) when printing ‘dir’) and adding it to the camera’s position (and getting the camera to point at that vector) perfectly orient the camera the same way as the camera anchor rotation quaternion?
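For reference, here is a plain-JavaScript sketch (my own, not Babylon source; helper names are mine) of what getDirection conceptually does: it rotates the local-space vector you pass in by the node’s world rotation. If `dir` really always prints as (0,-1,0), that would suggest the anchor’s world rotation is effectively identity at the moment of the call.

```javascript
// Sketch of what TransformNode.getDirection conceptually does:
// rotate a local-space vector by the node's world rotation quaternion.
// (Helper names are mine; Babylon actually goes through the world matrix.)
function rotateByQuaternion(v, q) {
  // v' = q * (v, 0) * q^-1, expanded to avoid a quaternion type
  const { x, y, z } = v;
  const { x: qx, y: qy, z: qz, w: qw } = q;
  const ix = qw * x + qy * z - qz * y;
  const iy = qw * y + qz * x - qx * z;
  const iz = qw * z + qx * y - qy * x;
  const iw = -qx * x - qy * y - qz * z;
  return {
    x: ix * qw + iw * -qx + iy * -qz - iz * -qy,
    y: iy * qw + iw * -qy + iz * -qx - ix * -qz,
    z: iz * qw + iw * -qz + ix * -qy - iy * -qx,
  };
}

const down = { x: 0, y: -1, z: 0 };
// Identity rotation: "down" stays down, so dir = (0, -1, 0)
console.log(rotateByQuaternion(down, { x: 0, y: 0, z: 0, w: 1 }));
// 90 degrees about X (x = w = ~0.7071): "down" rotates to about (0, 0, -1)
console.log(rotateByQuaternion(down, { x: 0.7071068, y: 0, z: 0, w: 0.7071068 }));
```

So the (0,-1,0) you pass in is only “down” in the anchor’s local space; the returned vector is that direction re-expressed in world space.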

Thanks in advance for any explanations and clarifications!!

The code of setTarget is in TargetCamera:


Thank you for the reply, @Evgeni_Popov. I made a playground that I think more succinctly illustrates the issue I am having.

Note: The .blend file that is associated with the model(s) in the playground is located in a comment in the playground.

What I am trying to do is orient the camera in the same position and rotation as the cameras I set up in the .blend file. I have the cameras nested under Empty Axis nodes in Blender. I adjust the empty nodes’ position and rotation, and keep the cameras’ positions and rotations both at 0,0,0, so that the parent node controls their orientation. In Babylon, I try to utilize these values to orient the camera in the same manner as the “cameraAnchors”.

However, there is consistently an issue with the camera’s rotation around the Y axis. This can be seen in the version of the model whose topdown camera anchor is rotated 90 degrees ((0,0,90) in Blender): the Babylon camera still faces the same direction (0,0,0) as in the regular model. Additionally, when you move the camera to one of the upside-down camera anchors (upside-down, as seen in the .blend file), the cameras are still oriented “up” instead of flipped upside down (the ground plane should always be below the camera, no matter which side you are on). The upside-down cameras are supposed to be rotated 180 degrees about the y axis, and yet they are not. Since the topdown camera anchor is supposed to be rotated 90 degrees about the y axis as well, and is not, this confirms that the y-axis rotation is not being applied/matched properly.

I tried to deduce where this issue is with the y axis. Below is the associated code and global variables (as far as I could tell), starting with the setTarget function you graciously provided me.

public _initialFocalDistance = 1;
public _camMatrix = Matrix.Zero();
public _referencePoint = new Vector3(0, 0, 1);
public _transformedReferencePoint = Vector3.Zero();
private _defaultUp = Vector3.Up(); //-> Returns a new Vector3 set to (0.0, 1.0, 0.0)

public setTarget(target: Vector3): void {
    this.upVector.normalize();

    this._initialFocalDistance = target.subtract(this.position).length();
    if (this.position.z === target.z) {
        this.position.z += Epsilon;
    }

    this._referencePoint.normalize().scaleInPlace(this._initialFocalDistance);

    Matrix.LookAtLHToRef(this.position, target, this._defaultUp, this._camMatrix);
    this._camMatrix.invert();

    this.rotation.x = Math.atan(this._camMatrix.m[6] / this._camMatrix.m[10]);

    var vDir = target.subtract(this.position);

    if (vDir.x >= 0.0) {
        this.rotation.y = (-Math.atan(vDir.z / vDir.x) + Math.PI / 2.0);
    } else {
        this.rotation.y = (-Math.atan(vDir.z / vDir.x) - Math.PI / 2.0);
    }

    this.rotation.z = 0;

    if (isNaN(this.rotation.x)) {
        this.rotation.x = 0;
    }
    if (isNaN(this.rotation.y)) {
        this.rotation.y = 0;
    }
    if (isNaN(this.rotation.z)) {
        this.rotation.z = 0;
    }

    if (this.rotationQuaternion) {
        Quaternion.RotationYawPitchRollToRef(this.rotation.y, this.rotation.x, this.rotation.z, this.rotationQuaternion);
    }
}
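Two things stand out to me in the setTarget code above for my use case (this is just my reading, happy to be corrected): rotation.z is hard-coded to 0, so any roll coming from the anchor is discarded, and the yaw computation divides by vDir.x, which for a straight-down target direction is 0/0 = NaN and gets clamped to 0. A plain-JS sketch of just the yaw branch (function name is mine):

```javascript
// Sketch of the yaw extraction in setTarget above (plain JS, no Babylon).
// vDir is the vector from camera position to target.
function yawFromDirection(vDir) {
  let y;
  if (vDir.x >= 0.0) {
    y = -Math.atan(vDir.z / vDir.x) + Math.PI / 2.0;
  } else {
    y = -Math.atan(vDir.z / vDir.x) - Math.PI / 2.0;
  }
  // setTarget clamps NaN rotations to 0
  return isNaN(y) ? 0 : y;
}

// A horizontal direction gives a meaningful yaw:
console.log(yawFromDirection({ x: 0, y: 0, z: 1 })); // 0 (looking down +Z)
// A straight-down direction gives atan(0/0) = NaN, clamped to 0,
// so any yaw the anchor had is silently lost:
console.log(yawFromDirection({ x: 0, y: -1, z: 0 })); // 0
```

If that reading is right, it would explain why precisely the top-down and bottom-up anchors lose their y-axis rotation: a target 1 unit straight below the camera hits the NaN branch.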

(the below is found in:

/**
 * Sets the given "result" Matrix to a rotation matrix used to rotate an entity so that it looks at the target vector3, from the eye vector3 position, the up vector3 being oriented like "up".
 * This function works in left handed mode
 * @param eye defines the final position of the entity
 * @param target defines where the entity should look at
 * @param up defines the up vector for the entity
 * @param result defines the target matrix
 */

Matrix.LookAtLHToRef = function (eye, target, up, result) {
    var xAxis = MathTmp.Vector3[0];
    var yAxis = MathTmp.Vector3[1];
    var zAxis = MathTmp.Vector3[2];

    // Z axis
    target.subtractToRef(eye, zAxis);
    zAxis.normalize();

    // X axis
    Vector3.CrossToRef(up, zAxis, xAxis);
    var xSquareLength = xAxis.lengthSquared();
    if (xSquareLength === 0) {
        xAxis.x = 1.0;
    } else {
        xAxis.normalizeFromLength(Math.sqrt(xSquareLength));
    }

    // Y axis
    Vector3.CrossToRef(zAxis, xAxis, yAxis);
    yAxis.normalize();

    // Eye angles
    var ex = -Vector3.Dot(xAxis, eye);
    var ey = -Vector3.Dot(yAxis, eye);
    var ez = -Vector3.Dot(zAxis, eye);

    Matrix.FromValuesToRef(xAxis._x, yAxis._x, zAxis._x, 0.0, xAxis._y, yAxis._y, zAxis._y, 0.0, xAxis._z, yAxis._z, zAxis._z, 0.0, ex, ey, ez, 1.0, result);
};
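One detail worth calling out in the LookAt construction above (my observation): when the look direction is parallel to the up vector, the cross product `Vector3.CrossToRef(up, zAxis, xAxis)` is the zero vector, and the code arbitrarily pins xAxis to (1, 0, 0). That is exactly the straight-up/straight-down case, so any yaw you expected from the anchor cannot survive it. A self-contained sketch:

```javascript
// The degenerate case in the LookAt construction:
// when the view direction is parallel to "up", cross(up, zAxis) vanishes
// and the x axis must be chosen arbitrarily (Babylon picks (1, 0, 0)).
function cross(a, b) {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}

const up = { x: 0, y: 1, z: 0 };
const lookingDown = { x: 0, y: -1, z: 0 };   // camera looking straight down
const lookingForward = { x: 0, y: 0, z: 1 };

console.log(cross(up, lookingDown));    // { x: 0, y: 0, z: 0 } -> degenerate
console.log(cross(up, lookingForward)); // { x: 1, y: 0, z: 0 } -> well-defined
```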

// Same as Tmp but not exported to keep it only for math functions to avoid conflicts

var MathTmp = /** @class */ (function () {
    function MathTmp() {
    }
    MathTmp.Vector3 = ArrayTools.BuildArray(6, Vector3.Zero);
    MathTmp.Matrix = ArrayTools.BuildArray(2, Matrix.Identity);
    MathTmp.Quaternion = ArrayTools.BuildArray(3, Quaternion.Zero);
    return MathTmp;
}());

private _cachedRotationZ = 0;
private _cachedQuaternionRotationZ = 0;

public _getViewMatrix(): Matrix {
    if (this.lockedTarget) {
        this.setTarget(this._getLockedTargetPosition()!);
    }

    // Compute
    this._updateCameraRotationMatrix();

    // Apply the changed rotation to the upVector
    if (this.rotationQuaternion && this._cachedQuaternionRotationZ != this.rotationQuaternion.z) {
        this._rotateUpVectorWithCameraRotationMatrix();
        this._cachedQuaternionRotationZ = this.rotationQuaternion.z;
    } else if (this._cachedRotationZ != this.rotation.z) {
        this._rotateUpVectorWithCameraRotationMatrix();
        this._cachedRotationZ = this.rotation.z;
    }

    Vector3.TransformCoordinatesToRef(this._referencePoint, this._cameraRotationMatrix, this._transformedReferencePoint);

    // Computing target and final matrix
    this.position.addToRef(this._transformedReferencePoint, this._currentTarget);
    if (this.updateUpVectorFromRotation) {
        if (this.rotationQuaternion) {
            Axis.Y.rotateByQuaternionToRef(this.rotationQuaternion, this.upVector);
        } else {
            Quaternion.FromEulerVectorToRef(this.rotation, this._tmpQuaternion);
            Axis.Y.rotateByQuaternionToRef(this._tmpQuaternion, this.upVector);
        }
    }
    this._computeViewMatrix(this.position, this._currentTarget, this.upVector);
    return this._viewMatrix;
}

protected _computeViewMatrix(position: Vector3, target: Vector3, up: Vector3): void {
    if (this.ignoreParentScaling) {
        if (this.parent) {
            const parentWorldMatrix = this.parent.getWorldMatrix();
            Vector3.TransformCoordinatesToRef(position, parentWorldMatrix, this._globalPosition);
            Vector3.TransformCoordinatesToRef(target, parentWorldMatrix, this._tmpTargetVector);
            Vector3.TransformNormalToRef(up, parentWorldMatrix, this._tmpUpVector);
            this._markSyncedWithParent();
        } else {
            this._globalPosition.copyFrom(position);
            this._tmpTargetVector.copyFrom(target);
            this._tmpUpVector.copyFrom(up);
        }

        if (this.getScene().useRightHandedSystem) {
            Matrix.LookAtRHToRef(this._globalPosition, this._tmpTargetVector, this._tmpUpVector, this._viewMatrix);
        } else {
            Matrix.LookAtLHToRef(this._globalPosition, this._tmpTargetVector, this._tmpUpVector, this._viewMatrix);
        }
        return;
    }

    if (this.getScene().useRightHandedSystem) {
        Matrix.LookAtRHToRef(position, target, up, this._viewMatrix);
    } else {
        Matrix.LookAtLHToRef(position, target, up, this._viewMatrix);
    }

    if (this.parent) {
        const parentWorldMatrix = this.parent.getWorldMatrix();
        this._viewMatrix.multiplyToRef(parentWorldMatrix, this._viewMatrix);
        this._markSyncedWithParent();
    } else {
        this._globalPosition.copyFrom(position);
    }
}

I’m not sure why the y-axis rotation is turning out funky, but my hunch is that it’s because the upVector is used for the x-axis calculation in the LookAtLHToRef function, and the x-axis and z-axis vectors are then used to calculate the y-axis vector. I’ve been trying to dig into the Matrix objects that are used, but I could use some help figuring out what is going on. For instance, why does the calculated matrix need to be inverted?

Would I have to make the “upVector” negative? I tried that and it didn’t seem to work, though I am open to someone proving me wrong. But I don’t think that would solve the issue with the 90 degrees rotation about the y axis not being applied, even on the positive side of the ground plane.

Additionally, I was recommended, I think by @PirateJC in a different forum post, to use:

var dir = cameraAnchor.getDirection(new BABYLON.Vector3(0, -1, 0));
var newCameraTarget = newCameraPosition.add(dir);

when trying to match the camera’s rotation up with the cameraAnchor transformNode’s rotation.

I’ve tried tinkering with this and looking at the function, but cannot figure it out in my head: how does taking the down-vector direction and adding it to the camera’s position (which makes the camera’s new target a spot 1 unit below the cameraAnchor’s position) end up with the camera facing the same orientation as the cameraAnchor, but only for cameraAnchors with non-zero x, y, and z rotations and orientations aligned with the upVector? Any light shed on this matter would be most appreciated!

Thanks in advance!

Have you tried to set scene.useRightHandedSystem = true; just after scene creation?

Your problems could come from the fact .glb files are in a right handed coordinate system whereas Babylon is left handed by default.

Yes, that was suggested in a similar forum post, but it did not work, and it also jumbled up all the positions of the meshes and reversed the face orientations of the textures.

@sebavan Do you know who wrote the setTarget() code for targetCamera and the functions it’s dependent on, who may be able to shine some light? Thanks!

@johntdaly7, as Babylon is a community project and the camera is a core feature, it has been written and updated by a lot of people 🙂

If you provide a min repro of the weird behaviour, the community will be happy to have a look at it. I guess your repro is probably a bit overwhelming to jump on at the moment?


Thanks @sebavan, and I know, just wasn’t sure if someone “owned” that particular code-base as the manager, or who got it started/did most of the coding.

And I already provided a repro that I think illustrates the issue pretty well. It’s in my long post above Evgeni’s. For convenience, here it is again. It’s not that long; it just has some code commented out to show my struggles/thought patterns and attempts (like trying rotation vs. rotationQuaternion). Links to the .blend and .glbs are in the comments too.


@RaananW who played a lot with cameras might have some clues ?

It feels like a classic gimbal lock issue (especially since we are using euler angles in setTarget) but that’s just an uneducated guess.

Thanks for the demo. I am trying to understand which one of the 8 anchors doesn’t work correctly, and what the expected behavior would be. If I read correctly, the top-view anchor (‘t’ button) is the one that behaves incorrectly. But unless I am mistaken, it behaves as expected. I am sorry if you already explained it, but it would be great if you told me how exactly I can reproduce the issue and what the expected behavior is.

A few things I noticed:

  1. First, the playground registers an event listener but doesn’t remove it from the DOM element, which means every time you run the playground a new event listener is added. Press play 10 times, and the code will run 10 times on every click. This playground eliminates this: testing camera anchors | Babylon.js Playground
  2. I checked the anchors and extracted their absolute rotation (including the root rotation due to the left-handed system). I then applied this rotation to the camera (which is parentless) when the position change had ended (eliminating the setTarget calls), but this seems to fail whenever I try using it: testing camera anchors | Babylon.js Playground. Should we expect this to work as well?

Hey @RaananW, thanks for testing it out and making that fix with the event listener.

Here are some videos that show the orientation of the cameras in the blend file (the link for the blend file is located at the top of the playground, and also here:

For the glb that doesn’t have the topdown camera anchor rotated around the y axis (‘cameraAnchorRotationTest.glb’), this is the video:

For the glb that DOES have the topdown camera anchor rotated 90 degrees around the y axis (‘cameraAnchorRotationTest_topdown_rotatedZ90.glb’), this is the video:

As you can see by comparing the two videos with the playground, the camera anchors that the camera does not match up with include:

BottomUp, 05, 06, 07, and 08 for ‘cameraAnchorRotationTest.glb’, and

TopDown, BottomUp, 05, 06, 07, and 08 for ‘cameraAnchorRotationTest_topdown_rotatedZ90.glb’.

The pattern that exists across all of these anchors is that they all have a non-zero y-axis rotation (z-axis in Blender).

Additionally, if you utilize the absoluteRotationQuaternion of each cameraAnchor (to avoid gimbal lock), which should print out as non-zero (verifiable by switching the rotation mode in the .blend file to display rotation quaternions), that does not seem to work either (though anyone is more than welcome to correct me on this in case I missed something! 🙂).

Hey @RaananW, just checking if you’ve been able to take a look at this and have any thoughts at all. Thanks so much!

Oh! So sorry! It totally slipped through the cracks…
I’ll assign it to myself so I don’t forget again and will try going over this tomorrow.

Sorry again!!

So, a few things about your scene.

First, here is a kind of a solution: testing camera anchors | Babylon.js Playground

When setting the camera’s parent to be the anchor, you get the expected behavior (I assume?).

The reason it didn’t work when copying the position and rotation is that the camera must already have a rotation for this to work (the camera’s rotation was set by the setTarget call when it was created). So taking the anchor’s rotation was not enough; the initial camera rotation (180 on y and 90 on x) should also be applied.

I believe the universal camera is not the right fit for you. An arc rotate camera would fit better: testing camera anchors | Babylon.js Playground

In general, you need to pay attention to the fact that the camera’s target has to be set after its position is applied. The direction based on the node’s rotation is incorrect, due to the base rotation of the camera.


Thank you for looking into this, @RaananW! This does seem to solve things, at least in regards to the camera’s end position and rotation when transitioning between them. However, I need to be able to animate between positions, which is why I was taking the rotation-adjustment route. Do you see a solution to this, using the parenting method? I don’t see how that could work.

Lastly, I appreciate the suggestion about the ArcRotateCamera. I am actually employing an ArcRotateCamera, but I need a UniversalCamera most of the time. I have a floorplan, and the UniversalCamera is used to move freely around the floorplan when inside the model, and also to switch between camera anchors that are oriented in the model to highlight specific areas. Then there’s a floorplan camera anchor, which sits high above the model, looking down. For smooth transitions, I want to be able to animate the UniversalCamera out of the model (from a camera anchor or a random position and orientation inside the model) to the position and rotation of the floorplan camera anchor above the model. Once that animation has smoothly completed, I want to seamlessly swap the UniversalCamera for the ArcRotateCamera, which should have the same position and rotation.

The issue was that the UniversalCamera was animating to a rotation and position one way (not always correct), but the arcRotateCamera was always orienting another way. So I figured there had to be an issue with rotating about the Y axis. That’s why I made that playground to explore how it worked.

However, I’m still not quite sure I understand what you mean by this:

" The reason it didn’t work with copying the position and rotation is that the camera should already have a rotation in order for it to work (the camera’s rotation was set with the setTarget call when it was created). so taking the anchor’s rotation was not enough - the initial camera rotation (180 y and 90 on x) should be applied."

If you apply a position and rotation to an object, shouldn’t that just change the object’s orientation to that position and rotation? Is it possible to call setTarget with null to override it, or (what I thought I was doing) call setTarget in a manner that aligns with the rotation of the cameraAnchor TransformNode?

I thought that was the point of the lines below that I was initially suggested to use, as noted above (and which does work for the camera anchors that have a non-zero rotation on every axis):

var dir = cameraAnchor.getDirection(new BABYLON.Vector3(0, -1, 0));
var newCameraTarget = newCameraPosition.add(dir);

Though, I can’t for the life of me understand why adding the DOWN vector to the position ends up with a rotation that matches the anchor (even if the anchor isn’t oriented down) via the use of setTarget(). I guess (and looking into the code, this seems to be the case) it’s because setTarget makes some assumptions and hard-codes some things, which is where I thought the issue might lie, and why I shared the setTarget-related code above.

In your initial playground you set the camera’s target right after initializing it. This generates a certain direction in which the camera is looking. That direction is, of course, translated to a position (0,4,0) and a rotation (x:90, y:180, z:0). This is the scene init.
When you apply the position and rotation of the object, you lose those two initial values, which your anchors expect you to have. Your anchors are configured so that if a camera with a positive-y position and a rotation on X and Y is attached to them, the camera will look in the right direction. And this is exactly what the parenting system is doing. If you take the anchor’s transformation and apply it to the camera, the camera loses its direction. Now it has the anchor’s orientation and needs to “look at” the right object. But actually, it needs this positive translation and orientation to look at the object correctly.
If you want to animate this instead of setting the parent, give the camera a transform node parent. Keep the camera’s orientation as it is, and set the transform node’s transformation to be exactly that of the anchor node. Now you have before and after values which you can animate.
The reason behind the -1 vector, IMO, is the positive camera position that is required in order to view the scene correctly. Try setting the camera’s initial position to (0,0,0), and you will see what I mean.
This is just a guess, of course. This would require a complete analysis of all of the values: the glb’s anchors, the model’s properties, everything.
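The point about the camera’s base rotation can be sketched numerically (plain JS; helper names are mine, not Babylon API). Under a parent, the camera’s world rotation is the parent’s rotation composed with the camera’s own local rotation, so copying the anchor’s rotation directly onto the camera drops the camera’s base rotation:

```javascript
// Why copying the anchor's rotation onto the camera is not the same as
// parenting: under a parent, world rotation = parentRotation * localRotation.
// (Plain-JS Hamilton product; helper names are mine, not Babylon API.)
function qMul(a, b) {
  return {
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
  };
}

const identity = { x: 0, y: 0, z: 0, w: 1 };
const anchor = { x: 0, y: 0.7071068, z: 0, w: 0.7071068 };     // 90 deg about Y
const cameraBase = { x: 0.7071068, y: 0, z: 0, w: 0.7071068 }; // 90 deg about X

// With no base rotation, parenting and copying agree:
console.log(qMul(anchor, identity)); // equals anchor
// With a base rotation, the composed result differs from the anchor alone,
// which is the "lost initial rotation" described above:
console.log(qMul(anchor, cameraBase));
```

This is only an illustration of the composition, not of Babylon internals; the actual values depend on the camera’s initial setTarget call.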

Thank you for elaborating on this some more, @RaananW. I think I have some more analysis to do to fully understand what you’re talking about regarding the setTarget issue.

Regarding the animation, I’m a bit confused as to how you set that up. I created a TransformNode and set it as the camera’s parent, but the rotation is still off. This playground doesn’t include it, but I assume you mean to animate the TransformNode, and not the camera, correct? See here:

If I use the parenting method with the TransformNode, I shouldn’t have to manually adjust the camera’s target as well, right?
Using setTarget with the ground position wouldn’t work anyway, because I won’t always have the camera anchors pointing towards the ground like I do in this playground. I tried the previously suggested method of using the down-vector direction, but that didn’t work either:

I’m just trying to get the orientation correct before I animate the camera’s parent TransformNode between its current transformation and the camera anchor transformation.

Any insight would be appreciated. Thanks again!

It really depends on your scene and configuration.

I can’t really comment on your scene’s configuration (or why it is off), but this is the expected behavior according to the export from blender.

If you want to animate, you should animate the transform node, yes. To get the correct values, you can set the parent, compute the world matrix, decompose it into rotation and position, and then set the values back to their original state. Now you have a before and after state and you can animate between them.