Can Someone Explain World Matrix to Me?

Can somebody help me understand the World Matrix of a mesh? I have been reading over the documentation but I still cannot quite grasp it. Here is where I am at in my mind:

So there is a World Origin and a Local Origin (of a mesh). The World Origin is (0, 0, 0) and everything in the World is based upon this World Origin.

Let's say I have a 1-unit cube mesh with a Local Origin of (0, 0, 0) at its center, sitting with its center exactly on the World Origin of (0, 0, 0). If I move this cube mesh to world position (5, 5, 5), does the World Matrix store this vector of (5, 5, 5)? So that Babylon knows to render the cube mesh at that specific position… and based on that vector, Babylon is able to access the 8 vertices of the cube mesh relative to the cube's Local Origin and render them accordingly? Is this at least a high-level understanding of what is happening?
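
For reference, here is roughly what I mean as code (an untested sketch, assuming a standard playground scene variable and the BABYLON namespace):

let box = BABYLON.MeshBuilder.CreateBox("box", { size: 1 }, scene); // local origin at the cube's center
box.position.set(5, 5, 5);    // move the cube to world position (5, 5, 5)
box.computeWorldMatrix(true); // force the world matrix to update immediately
console.log(box.getWorldMatrix().getTranslation()); // I'd expect this to log (5, 5, 5)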

Also, does the World Matrix account for rotation as well, or just position… assuming my thought process is even accurate in the first place?

And finally, can someone help me understand this code? I know what it does, since it works in my game, but I don't truly understand why it works lol:

let origin = this.camera.position; 
let wm = this.camera.getWorldMatrix();
let aimVector = Vector3.TransformCoordinates(Vector3.Forward(), wm).subtract(origin).normalize();

Hi @dawickizer

A world matrix (and matrices in general) is a set of 16 numbers (4x4) used to transform positions and vectors from one reference space to another.
It contains everything needed to translate, rotate and scale. In tools like 3D modelers, engines, … you'll often see input boxes for specifying the translation, rotation and scale of a mesh. Those values are then converted to a matrix, because the math and computation are simpler when dealing with a matrix.
Mesh vertex positions, normal vectors, … everything space related is multiplied by this matrix to compute the final result.
The notion of local coordinates and world coordinates is really important: a local coordinate is a position or vector before its transformation by the matrix.
A world matrix transforms a local coordinate into a world coordinate. Hence its name.
In the same vein, cameras also have matrices, used to convert the mesh into camera space.
A matrix is made of 4 row vectors; for mesh world matrices they look like this:

RightX, RightY, RightZ, 0
UpX, UpY, UpZ, 0
DirectionX, DirectionY, DirectionZ, 0
TranslationX, TranslationY, TranslationZ, 1

The magnitudes of the Right, Up and Direction vectors give the scale of the mesh.
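
To make that concrete, here is a small illustrative sketch (not from the original answer; it assumes a standard Babylon scene) that sets a translation, rotation and scale on a mesh and then reads them back out of its world matrix:

let box = BABYLON.MeshBuilder.CreateBox("box", { size: 1 }, scene);
box.position.set(5, 5, 5);    // translation
box.rotation.y = Math.PI / 4; // rotation (45 degrees around Y)
box.scaling.set(2, 2, 2);     // scale
box.computeWorldMatrix(true);

let scale = new BABYLON.Vector3();
let rotation = new BABYLON.Quaternion();
let translation = new BABYLON.Vector3();
box.getWorldMatrix().decompose(scale, rotation, translation); // scale ≈ (2, 2, 2), translation = (5, 5, 5)

// The magnitude of the first row (the Right vector) is the X scale, here ≈ 2.
console.log(box.getWorldMatrix().getRow(0).length());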

Concerning your code: origin is the position of the camera (the position of the eye).
wm is the camera's world matrix (its position and orientation).

Vector3.TransformCoordinates(Vector3.Forward(), wm)

This code transforms the local position Vector3.Forward() with the world matrix, giving a world position. It will be a position just in front of the camera.
Then, the camera origin is subtracted from this world position in order to get a vector. This vector is the direction the camera is facing. The vector is then normalized so its length is 1.
This code can be simplified to:

let wm = this.camera.getWorldMatrix();
let aimVector = Vector3.TransformNormal(Vector3.Forward(), wm).normalize();

Instead of computing two world positions and subtracting them, the forward vector is transformed directly from local to world coordinates and normalized.
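
For example, both versions should produce the same aim vector (an illustrative sketch, assuming this.camera is any Babylon camera with no parent):

let wm = this.camera.getWorldMatrix();

// Version 1: transform a point one unit in front of the camera, then subtract the eye position.
let viaPoint = Vector3.TransformCoordinates(Vector3.Forward(), wm)
    .subtract(this.camera.position)
    .normalize();

// Version 2: transform the forward direction itself (TransformNormal ignores the translation part).
let viaNormal = Vector3.TransformNormal(Vector3.Forward(), wm).normalize();

// viaPoint and viaNormal point the same way (up to floating-point error).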

Here is a good tutorial with a bunch of math, but also some nice pictures, that will help you understand:

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/


Wow, You Guys rock,
Thanks @Cedric for taking the time to write all that.
You can actually get university-level free classes for 3D in the BJS forum. :wink:
This is simply amazing && awesome support :smiley: Luv it!


Wow. Thanks for the thorough response. I really appreciate you taking the time to help me. I'll have to learn the fundamentals of 3D space. I have a software engineering background, but the 3D math is over my head for now haha. I do understand what you are saying at a high level, though, and I'll do my best to get up to speed so that I can actually understand what I am implementing in my game code!


That was super useful to understand. We should copy/paste that into the Docs!

Cedric, I'm using Thin Instances, having followed two tutorials by Jason and David; however, I don't quite understand how color is being mapped to each instance in David's example.
Here is David’s PG: https://playground.babylonjs.com/#PYL7JG#1

Here is my PG where I've captured RGBA values from an image into an array, and I'm trying to map them onto the instances. https://playground.babylonjs.com/#XCYFIB#12

I figure my difficulty has to do with Matrices and Arrays…
My original thread of inquiry is here: Quest to create a Hologram with a JPEG! (how to? using Thin Instances, Matrix, Arrays) - #60 by HirangaG


The lookup in the texture data was not correct. It needs to pick the correct index in dynColorData.data.

Hologram | Babylon.js Playground (babylonjs.com)

The scaling is not right on Y; I didn't dive that deep into your code, but you'll get the idea :slight_smile:
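
In case it helps with the indexing: canvas image data such as dynColorData.data is a flat array with 4 bytes (R, G, B, A) per pixel, stored row by row starting from the top-left of the image. A lookup goes roughly like this (an illustrative sketch; imgWidth, row and col are assumed names, not the playground's):

const index = (row * imgWidth + col) * 4;     // 4 RGBA bytes per pixel, rows stored top to bottom
const r = dynColorData.data[index]     / 255; // red,   0..1
const g = dynColorData.data[index + 1] / 255; // green, 0..1
const b = dynColorData.data[index + 2] / 255; // blue,  0..1
const a = dynColorData.data[index + 3] / 255; // alpha, 0..1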


Oh wow! Amazing!! Thank you!
I think I only partially understand the code; specifically, I don't quite get the decx and decy parts:
I get that it is finding the size of the resulting pixel grid, given that each final pixel will be a 0.01-size cube,
and therefore the original image needs to be divided into 192x192 squares…
? At this step, is the average of the image pixel colours taken, or is it just selecting the value of the pixel in the top left of each divided square of pixels?
? In what order are the pixels analyzed? i.e. bottom row first, left to right, row by row up to the top?
? Does this go back into the resulting matrix/array in the same order, for constructing the thin instances?

? In decx, why is it multiplied by x, and then for decy, why is imgH being subtracted from?
I'm guessing it has something to do with the read order and then the order in which values are applied to the thin instances, but it would be great to understand the cause for certain… or the pattern of read/write…

decx is the conversion from world x to image space; the same goes for decy.
The y is inverted against imgH because the image is stored upside down.
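
Roughly, the idea is this (an illustrative sketch with assumed names like cellSize and imgW, not the exact playground code): the world x/y of each thin instance is converted into an image column/row, and the row is flipped against imgH because row 0 of the image data is the top of the picture while world y grows upward:

const decx = Math.floor(x / cellSize);            // world x -> image column (cellSize assumed, e.g. 0.01)
const decy = imgH - 1 - Math.floor(y / cellSize); // world y -> image row, flipped vertically
const index = (decy * imgW + decx) * 4;           // 4 RGBA bytes per pixel, as above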


Okay… decx makes sense… Thanks!
And the image is stored upside down? …

so… if the following is a grid of pixels:
[ a, b, c, d,
e, f, g, h,
i, j, k, l]

Stored as [a, b, … k, l]?

But is it applied to the thin instances as:
[ . , . , . , . ,
. , . , . , . ,
i , j , k , . ] <<<starts with i, j … ?

where
[ a, b, c, d, <<<this line is completed last?
e, f, g, h,
i, j, k, l]

The image in the Uint8Array is stored as:
[ a, b, c, d,
e, f, g, h,
i, j, k, l]

and is used to create the instances by reading the values as:
[ i, j, k, l,
e, f, g, h,
a, b, c, d]


Okay… I will have a play with this and see if it helps my understanding hahaha. Thank you!
And that's why you're subtracting from imgH… As y increases, it builds from the bottom row up… ?

I think I get it :slight_smile:

For posterity haha I did a little test:
On line 124, I subtracted 100 from the max height,

and then subtracted 80.

The mapping builds correctly, from bottom to top, thanks to that freaky bit of code @Cedric :muscle:t5:
