**(1) First, regardless of whether we are talking about DirectX or OpenGL, vectors and matrices are defined according to the standard conventions of linear algebra:**

"The element C(i,j) in row i and column j of the product C of matrices A and B is equal to the sum of the products of the corresponding elements of row i of A and column j of B." (Practical Mathematics Manual, Science Press, Second Edition)

For example, C12 = A11 * B12 + A12 * B22 + A13 * B32 + ...

**(2) With that clarified, let's look at how matrices are stored. There are two conventions: "row-major order" (row priority) and "column-major order" (column priority).**

1) Direct3D uses row-major storage

"Effect matrix parameters and HLSL matrix variables can define whether the value is a row-major or column-major matrix; however, the DirectX APIs always treat D3DMATRIX and D3DXMATRIX as row-major." (see the D3D9 documentation, "Casting and Conversion")

2) OpenGL uses column-major storage

"The m parameter points to a 4x4 matrix of single- or double-precision floating-point values stored in column-major order. That is, the matrix is stored as follows:"

(See the MSDN glLoadMatrixf API description.)

The storage order describes how a linear-algebra matrix is laid out in a linear memory array: D3D writes the matrix into the array row by row, while OpenGL writes it column by column.

The same linear-algebra matrix therefore has different storage sequences in D3D and OpenGL:

```
Linear algebra:       D3D (row-major):      OpenGL (column-major):
A11 A12 A13 A14       A11, A12, A13, A14    A11, A21, A31, A41
A21 A22 A23 A24       A21, A22, A23, A24    A12, A22, A32, A42
A31 A32 A33 A34       A31, A32, A33, A34    A13, A23, A33, A43
A41 A42 A43 A44       A41, A42, A43, A44    A14, A24, A34, A44
```

**(3) Matrix multiplication order and rules**

The definition of matrix multiplication in linear algebra is unambiguous. However, implementations differ in whether a transformation is written as a left multiplication ("pre-multiply") or a right multiplication ("post-multiply").

Which form applies depends on the vector representation: row vector or column vector. A row vector is really a 1×n matrix, so transforming it is written vector × matrix, which is exactly the "row times column" product of linear algebra; a column vector is an n×1 matrix, so the matrix must stand on the left. The same rule governs matrix-matrix multiplication.

For example, D3D uses row vectors with row-major storage, while OpenGL uses column vectors with column-major storage. As shown above, the same transformation matrix occupies memory identically in D3D and OpenGL, and the transformation results are also the same: because OpenGL treats the vector as a column vector and multiplies it against each column of the stored array, it realizes exactly the same linear-algebra transformation.

It is usually hard to see the actual coordinate-transformation code of OpenGL. The following routine comes from the OpenGL sample-implementation source; let's look at the "true colors" of vertex transformation:

```c
void FASTCALL __glXForm3(__GLcoord *res, const __GLfloat v[3], const __GLmatrix *m)
{
    __GLfloat x = v[0];
    __GLfloat y = v[1];
    __GLfloat z = v[2];

    res->x = x * m->matrix[0][0] + y * m->matrix[1][0] + z * m->matrix[2][0]
        + m->matrix[3][0];
    res->y = x * m->matrix[0][1] + y * m->matrix[1][1] + z * m->matrix[2][1]
        + m->matrix[3][1];
    res->z = x * m->matrix[0][2] + y * m->matrix[1][2] + z * m->matrix[2][2]
        + m->matrix[3][2];
    res->w = x * m->matrix[0][3] + y * m->matrix[1][3] + z * m->matrix[2][3]
        + m->matrix[3][3];
}
```

As described above, "the OpenGL column vector multiplied by each column of the matrix" still amounts to a linear-algebra row vector multiplied by each row of the matrix.

Now let's look at OpenGL matrix multiplication, which "multiplies each column of A by each row of B":

```c
/*
** Compute r = a * b, where r can equal b.
*/
void FASTCALL __glMultMatrix(__GLmatrix *r, const __GLmatrix *a, const __GLmatrix *b)
{
    __GLfloat b00, b01, b02, b03;
    __GLfloat b10, b11, b12, b13;
    __GLfloat b20, b21, b22, b23;
    __GLfloat b30, b31, b32, b33;
    GLint i;

    b00 = b->matrix[0][0]; b01 = b->matrix[0][1];
    b02 = b->matrix[0][2]; b03 = b->matrix[0][3];
    b10 = b->matrix[1][0]; b11 = b->matrix[1][1];
    b12 = b->matrix[1][2]; b13 = b->matrix[1][3];
    b20 = b->matrix[2][0]; b21 = b->matrix[2][1];
    b22 = b->matrix[2][2]; b23 = b->matrix[2][3];
    b30 = b->matrix[3][0]; b31 = b->matrix[3][1];
    b32 = b->matrix[3][2]; b33 = b->matrix[3][3];

    for (i = 0; i < 4; i++) {
        r->matrix[i][0] = a->matrix[i][0] * b00 + a->matrix[i][1] * b10
            + a->matrix[i][2] * b20 + a->matrix[i][3] * b30;
        r->matrix[i][1] = a->matrix[i][0] * b01 + a->matrix[i][1] * b11
            + a->matrix[i][2] * b21 + a->matrix[i][3] * b31;
        r->matrix[i][2] = a->matrix[i][0] * b02 + a->matrix[i][1] * b12
            + a->matrix[i][2] * b22 + a->matrix[i][3] * b32;
        r->matrix[i][3] = a->matrix[i][0] * b03 + a->matrix[i][1] * b13
            + a->matrix[i][2] * b23 + a->matrix[i][3] * b33;
    }
}
```

**1. Matrices and linear transformations: a one-to-one correspondence**

A matrix is a tool for representing a linear transformation; matrices and linear transformations correspond one to one.

Consider the linear transformation:

```
a11*x1 + a12*x2 + ... + a1n*xn = x1'
a21*x1 + a22*x2 + ... + a2n*xn = x2'
...
am1*x1 + am2*x2 + ... + amn*xn = xm'
```

Correspondingly, the matrix representation is:

```
| a11 a12 ... a1n |   | x1  |   | x1' |
| a21 a22 ... a2n |   | x2  |   | x2' |
| ...             | * | ... | = | ... |
| am1 am2 ... amn |   | xn  |   | xm' |
```

It can also be written as:

```
                     | a11 a21 ... am1 |
                     | a12 a22 ... am2 |
| x1 x2 ... xn |  *  | ...             |  =  | x1' x2' ... xm' |
                     | a1n a2n ... amn |
```

Six matrices are involved: A[m*n], x[n*1], x'[m*1], x[1*n], A'[n*m] (the transpose of A), and x'[1*m].

This can be understood as: the vector x = (x1, x2, ..., xn), after applying the transformation matrix A[m*n] (or its transpose A'[n*m]), becomes another vector x' = (x1', x2', ..., xm').

**2. Matrix representation: row matrix vs. column matrix**

The names "row matrix" and "column matrix" derive from row vectors and column vectors.

In fact, matrix A[m*n] can be regarded as a row matrix consisting of m n-dimensional row vectors, or as a column matrix consisting of n m-dimensional column vectors.

x[n*1] / x'[m*1] is an n-/m-dimensional column vector; x[1*n] / x'[1*m] is an n-/m-dimensional row vector.

Row matrix and column matrix are just two different representations: the former maps a vector to a row of the matrix, the latter maps a vector to a column of the matrix.

Essentially they represent the same linear transformation; the rules of matrix algebra let us switch between the two mapping relationships through the transpose operation.

**3. Matrix multiplication order: pre-multiplication (left multiplication) vs. post-multiplication (right multiplication)**

Note that the two representations correspond to different operation orders:

If a column vector is transformed, the transformation matrix (a row matrix, i.e., one made of row vectors) must appear on the left of the multiplication sign; this is pre-multiplication, also called left multiplication.

If a row vector is transformed, the transformation matrix (a column matrix, i.e., one made of column vectors) must appear on the right of the multiplication sign; this is post-multiplication, also called right multiplication.

In practice it is hard to get this wrong, because matrix multiplication requires the inner dimensions to agree. But why this rule? Why must a row be multiplied into a column rather than a column into a row? Think about it...

Therefore, left multiplication versus right multiplication is determined by how the transformed vector is represented, not by the storage order.

**4. Storage order of the matrix: row-major vs. column-major**

When using matrices on a computer, the first problem encountered is how to store them.

Because computer memory is sequential, how to lay out the m*n elements of A[m*n] is a real question. There are generally two conventions: row-major and column-major.

Row-major: the elements are saved in the order a11, a12, ..., amn.

Column-major: the elements are saved in the order a11, a21, ..., amn.

The problem is that, given only the stored sequence of elements, you cannot tell how to read them back into a matrix; for example, you do not know which row and column a12 belongs in.

Therefore each system adopts its own rule: elements are read back with the same rule by which they were stored. DX uses row-major and OGL uses column-major. That is, the same matrix A[m*n] has different storage sequences in DX and OGL, which causes trouble when converting between the two systems.

However, in DX, points/vectors are represented as row vectors, so the corresponding transformation matrix is a column matrix (made of column vectors); in OGL, points/vectors are represented as column vectors, so the corresponding transformation matrix is a row matrix (made of row vectors). Therefore, to apply a 4*4 matrix transformation A to a vector or point x = (x1, x2, x3, 1) in DX, we compute x' = x * A[4*4], and with row-major storage the sequence in memory is a11, a12, ..., a43, a44. To perform the same transformation in OGL, which uses the row-matrix form, the applied matrix must be A'[4*4], the transpose of A[4*4], that is, x' = A'[4*4] * x. But because OGL uses column-major storage, its sequence in memory is also exactly a11, a12, ..., a43, a44!

In fact, for DX and OGL, the same transformation produces the same sequence of matrix elements in memory. For example, in a translation matrix, the 13th, 14th, and 15th elements of the array hold deltaX, deltaY, and deltaZ.

Refs:

http://mathworld.wolfram.com/Matrix.html

http://www.gamedev.net/community/forums/topic.asp?topic_id=321862