
Edit: it is "backwards" to me - I may be missing some intuition

Given a GLM transform function such as glm::translate, the two parameters are first a matrix m and then a vector v for the translation.

Intuitively, I would expect this function to apply the translation "after" my matrix transform, i.e. multiplying an object by the returned matrix will first apply m followed by the translation v specified.

This intuition comes from the fact that one usually builds a transformation in mathematical order, e.g. first compute a scale matrix, then apply a rotation, then a translation, etc., so I would expect the function calling order to be the same (i.e. given a matrix, I can simply call glm::translate to apply a translation which happens after my matrix's transform is applied).

However, as mentioned in this thread, that is not the case - the translation is applied first, followed by the matrix m passed in.

I don't believe this has anything to do with the column-major/row-major convention and notation as some threads suggest. Is there a historical reason for this? It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.

Gary Allen
  • I think you're misreading the thread. GLM acts as you expect; in that thread, the multiplication of the matrices in the shader is in the wrong order. – RoQuOTriX Aug 03 '21 at 11:13
  • I once wrote about the order of transformations. It was about image transformations, but I used matrices as well, and the principle (concerning the order) is the same as in 3D. Maybe it helps: [Rotate an image in C++ without using OpenCV functions](https://stackoverflow.com/a/56985104/7478597) – Scheff's Cat Aug 03 '21 at 11:16

1 Answer


> This intuition comes from the fact that one usually builds a transformation in mathematical order

But there is no such thing as a mathematical order. Consider the following: v is an n-dimensional vector and M an n x n square matrix. Now the question is: which is the correct multiplication order? And that again depends on your convention. In most classic math textbooks, vectors are defined as column vectors, and then M * v is the only valid multiplication order, while v * M is simply not a valid operation mathematically.

If v is a column vector, then its transpose v^T is a row vector, and then v^T * M is the only valid multiplication order. However, to achieve the same result as before, say x = M * v, you have to also transpose M: x^T = v^T * M^T.
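As a quick sanity check (a sketch of mine, not from the answer; the values are made up), GLM actually supports both orders, since v * M is defined as a row-vector product, so the equivalence can be verified directly:

```cpp
// Verify x = M * v against x^T = v^T * M^T using GLM.
#include <glm/glm.hpp>
#include <cassert>

int main() {
    glm::mat4 M(1.0f);
    M[3] = glm::vec4(1.0f, 2.0f, 3.0f, 1.0f);  // put a translation in the last column

    glm::vec4 v(4.0f, 5.0f, 6.0f, 1.0f);

    glm::vec4 x  = M * v;                   // column-vector convention
    glm::vec4 xt = v * glm::transpose(M);   // row-vector convention, same result
    assert(x == xt);  // exact here, since all values are small integers
    return 0;
}
```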

If M is the product of two matrices A and B, then, since matrix multiplication is associative but not commutative, we get:

```
x = M * v
x = A * B * v
x = A * (B * v)
```

or, we could say:

```
y = B * v
x = A * y
```

so clearly, B is applied first.
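To make this concrete, here is a small sketch of mine (the values are made up, not from the answer) showing the same grouping with GLM: T is written closest to v, so the point is translated first and rotated afterwards:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 I(1.0f);
    glm::mat4 T = glm::translate(I, glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 R = glm::rotate(I, glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));

    glm::vec4 v(1.0f, 0.0f, 0.0f, 1.0f);

    // T is closest to v, so the translation is applied first:
    glm::vec4 x = R * T * v;    // translate (1,0,0) -> (2,0,0), then rotate -> (0,2,0)
    glm::vec4 y = R * (T * v);  // the same computation, grouped explicitly
    std::printf("%g %g %g\n", x.x, x.y, x.z);  // ~ (0, 2, 0), up to float rounding
    return 0;
}
```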

In the transposed convention with row vectors, we need to follow (A * B)^T = B^T * A^T and get:

```
x^T = v^T * M^T
x^T = v^T * B^T * A^T
x^T = (v^T * B^T) * A^T
```

So B^T is again applied first.

In fact, regardless of the convention, the matrix written closest to the vector is the one applied first.

> I don't believe this has anything to do with the column-major/row-major convention and notation as some threads suggest.

You are right, it has absolutely nothing to do with that. The storage order can be arbitrary and does not change the meaning of the matrices and operations. The confusion often comes from the fact that interpreting a matrix which is stored column-major as a matrix stored row-major (or vice-versa) will just have the effect of transposing the matrix.
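As a small illustration of that point (my own sketch, using glm::make_mat4 from GLM's gtc/type_ptr extension, which reads an array in column-major order): feeding in the same 16 floats and then switching the interpretation of the storage order is exactly a transpose:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <cassert>

int main() {
    float data[16] = { 1,  2,  3,  4,
                       5,  6,  7,  8,
                       9, 10, 11, 12,
                      13, 14, 15, 16 };

    // glm::make_mat4 reads the array column-major (GLM's convention),
    // so the first four floats become the first column.
    glm::mat4 asColumnMajor = glm::make_mat4(data);

    // Interpreting the same data row-major is the same as transposing:
    glm::mat4 asRowMajor = glm::transpose(asColumnMajor);
    assert(asRowMajor[0] == glm::vec4(1, 5, 9, 13));  // first column = first row of data
    return 0;
}
```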

Also, GLSL, HLSL, and many math libraries do not use explicit column or row vectors, but treat a vector as whichever fits the expression. E.g., in GLSL you can write:

```glsl
vec4 v;
mat4 M;
vec4 a = M * v;  // v is treated as a column vector here
vec4 b = v * M;  // v is treated as a row vector now
// NOTE: a and b are NOT equal here. They would be if b = v * transpose(M),
// so swapping the multiplication order has the effect of transposing the matrix.
```

> Is there a historical reason for this?

OpenGL follows classical math conventions at many points (e.g. the window-space origin is at the bottom-left, not the top-left as in most window systems; the old fixed-function view-space convention was a right-handed coordinate system with z pointing out of the screen towards the viewer, so the camera looks towards -z), and the OpenGL spec uses column vectors to this day. This means that the vertex transform has to be M * v, and the "reverse" order of the transformations applies.

This means that, in legacy GL, the following sequence:

```c
glLoadIdentity();    // M = I
glRotate(...);       // M = M * R = R
glTranslate(...);    // M = M * T = R * T
```

will first translate the object, and then rotate it.

GLM was designed to follow the OpenGL conventions by default, and the function `glm::mat4 glm::translate(glm::mat4 const& m, glm::vec3 const& translation);` is explicitly emulating the old fixed-function GL behavior.
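Written with GLM, the equivalent of that legacy sequence would look something like this (a sketch of mine; the angle and offset are placeholder values):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 build() {
    glm::mat4 M(1.0f);                                            // M = I
    M = glm::rotate(M, glm::radians(45.0f), glm::vec3(0, 0, 1));  // M = M * R = R
    M = glm::translate(M, glm::vec3(1.0f, 0.0f, 0.0f));           // M = M * T = R * T
    return M;  // applied to a vertex as M * v: translate first, then rotate
}
```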

> It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.

Do as you wish. You could set up functions which pre-multiply instead of post-multiplying (a possible wrapper is sketched below). Or you could set up all transformation matrices as transposed and post-multiply in the order you consider "intuitive". But note that for someone following either classical math conventions or typical GL conventions, the "backwards" notation is the "intuitive" one.
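For example, a pre-multiplying wrapper could look like the following sketch (the name translate_after is hypothetical, not part of GLM):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helper: applies the translation AFTER the existing transform m,
// by pre-multiplying T instead of GLM's default post-multiplication.
glm::mat4 translate_after(glm::mat4 const& m, glm::vec3 const& t) {
    return glm::translate(glm::mat4(1.0f), t) * m;  // T * m: m first, then T
}
```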

derhass
  • Hmmm, okay. I guess the order of the functions simply corresponds to the left-to-right order of the matrices. I was looking at the function calls as, e.g. in your case, first "identity-ify" the point, then rotate the point, then translate the point, when really it should be viewed as "the matrix I am creating is identity * rotate * translate". Thanks for the detailed answer! I wonder what other libraries, e.g. Eigen, use. I wouldn't want my functions to be entirely non-standard! Thanks again – Gary Allen Aug 03 '21 at 12:40