The physical meaning of matrices and their transformations, eigenvalues, and eigenvectors
I was recently using PCA for clustering, which involves matrix eigenvalues and eigenvectors. I found an article online about eigenvectors and their physical meaning, so I have organized it and share it here.
I. Matrix Basics

From [1]:
A matrix is a two-dimensional array, and a matrix can be seen as a transformation. In linear algebra, a matrix can transform one vector into another, or carry coordinates from one coordinate system to another. The "basis" of a matrix is really the coordinate system in which its transformation is expressed. So-called similar matrices are the same transformation expressed in different coordinate systems. Similarity transformations in linear algebra are really about giving these matrices a nicer appearance, without changing the transformations they perform.
Although a matrix is two-dimensional as an array, we usually call its size the dimension of the matrix; for example, a 3-by-3 matrix can be called a three-dimensional matrix.
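To make "a matrix is a transformation" and "similar matrices are the same transformation in different coordinate systems" concrete, here is a minimal NumPy sketch (the matrices A and P are my own illustrative choices, not from the referenced articles):

```python
import numpy as np

# A 2x2 matrix viewed as a transformation of the plane.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, 1.0])
y = A @ x                       # transform x in standard coordinates

# Change of basis: the columns of P are the new basis vectors.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P    # similar matrix: same transformation, new coordinates

x_new = np.linalg.inv(P) @ x    # coordinates of x in the new basis
y_new = B @ x_new               # apply the transformation in the new basis

# Mapping the result back to standard coordinates recovers y.
assert np.allclose(P @ y_new, y)
```

B looks different from A, but the assertion shows they describe the same transformation, merely written in different coordinates.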
II. An Intuitive Description

From [2]:
Let's first look at something intuitive. The characteristic equation of a matrix is:

Ax = λx
A matrix can in fact be regarded as a transformation: the left side of the equation moves the vector x to another position, while the right side stretches the vector x, the amount of stretch being λ (lambda). The meaning is obvious: this equation expresses a property of the matrix A, namely that A can lengthen (or shorten) the vector x by a factor of λ. That's all.
For a given matrix A, not every vector x can be stretched this way. Any vector that is simply lengthened (or shortened) by the matrix A is called an eigenvector of A, and the factor by which it is lengthened (or shortened) is the eigenvalue corresponding to that eigenvector.
Note that an eigenvector as we speak of it is really a class of vectors: any eigenvector multiplied by an arbitrary nonzero scalar still satisfies the equation above, so the two can be regarded as the same eigenvector, and they correspond to the same eigenvalue.
If an eigenvalue is negative, the matrix not only lengthens (or shortens) the eigenvector but also reverses its direction (it points the opposite way). A matrix can stretch more than one vector, so it may have more than one eigenvalue. In addition, for a real symmetric matrix, eigenvectors corresponding to different eigenvalues must be orthogonal.
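As a quick check of these claims, here is a minimal NumPy sketch (the matrix is an arbitrary example of mine, not from the original article): it verifies Ax = λx for each eigenpair, shows a negative eigenvalue reversing direction, and confirms that the eigenvectors of a real symmetric matrix are orthogonal.

```python
import numpy as np

# A real symmetric matrix chosen so that one eigenvalue is negative.
A = np.array([[1.0, 3.0],
              [3.0, 1.0]])      # eigenvalues are 4 and -2

vals, vecs = np.linalg.eig(A)   # columns of vecs are eigenvectors

for lam, v in zip(vals, vecs.T):
    # The characteristic equation: A v equals lambda * v.
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam:+.1f}: A v points in the "
          f"{'same' if lam > 0 else 'opposite'} direction as v")

# Eigenvectors of a real symmetric matrix belonging to
# distinct eigenvalues are orthogonal.
assert np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0)
```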
We can also say that all the eigenvectors of a transformation matrix together form a set of bases for that matrix. A so-called basis can be understood as the axes of a coordinate system. We usually work in the Cartesian coordinate system, but in linear algebra it can be distorted, stretched, and rotated; this is called a change of basis. We can choose the basis according to our needs, but the basis vectors must be linearly independent; that is, no axis of the coordinate system may point in the same direction as another or be a combination of the other axes, for otherwise they could not span the original space. In principal component analysis (PCA), by placing the basis along the directions of maximum stretch we can compress the data with little distortion, ignoring the components of small magnitude.
It matters that all the eigenvectors of a transformation matrix can serve as a basis of the space, because along these directions the transformation matrix merely stretches vectors, without distortion or rotation, which makes computation much simpler. So eigenvalues are important, but our ultimate goal is the eigenvectors.
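To make the PCA remark concrete, here is a sketch on synthetic data (my own construction, not from the article): we eigendecompose the covariance matrix of centered data and keep the eigenvector with the largest eigenvalue as the new basis direction of maximum stretch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data, stretched mostly along one direction.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                          [0.0, 0.5]])
X = X - X.mean(axis=0)               # center the data

cov = np.cov(X, rowvar=False)        # 2x2 covariance matrix
vals, vecs = np.linalg.eigh(cov)     # eigh: for symmetric matrices

# The basis vector along the direction of maximum stretch is the
# eigenvector of the largest eigenvalue (eigh sorts ascending).
principal = vecs[:, -1]

# Project onto that single direction: 2-D -> 1-D compression.
X_reduced = X @ principal
print("fraction of variance kept:", vals[-1] / vals.sum())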
III. Several Important Abstract Concepts

1. The Kernel
The set of all vectors that the matrix transforms into the zero vector, usually denoted Ker(A).
Imagine you are a vector and there is a matrix about to transform you. If you unfortunately fall into the kernel of this matrix, then regrettably, after the transformation you become the void of the zero vector. Note in particular that the kernel is a notion attached to a "transformation" (transform); the analogous concept for a matrix is called the "null space". Some materials use T to denote the transformation and A for its matrix; here we simply regard the matrix itself as the transformation. The space in which the kernel lives is defined as the V space, which is the original space of all the vectors.
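As an illustration, here is one way to compute the kernel numerically (the matrix is my own example): the rows of V^T from the SVD whose singular values are zero span Ker(A).

```python
import numpy as np

# A rank-deficient 3x3 matrix: the third row is the sum of
# the first two, so the kernel is one-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

# Rows of Vt with (near-)zero singular values span Ker(A).
U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
kernel = Vt[rank:]           # orthonormal basis of the kernel

# Every vector in the kernel is sent to the zero vector.
for v in kernel:
    assert np.allclose(A @ v, 0.0)
print("dim Ker(A) =", len(kernel))
```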
2. The Range
The set of vectors obtained by applying the transformation matrix to all the vectors of a space, usually denoted R(A).
Imagine again that you are a vector and there is a matrix about to transform you: the range of this matrix represents all your possible positions after the transformation. The dimension of the range is also called the rank. The space in which the range lives is defined as the W space.
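Continuing with the same illustrative matrix, the rank and an orthonormal basis of the range can be read off numerically, for example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

# The dimension of the range R(A) is the rank.
print("rank =", np.linalg.matrix_rank(A))      # -> 2

# Columns of U with nonzero singular values form an
# orthonormal basis of the range (the column space).
U, s, Vt = np.linalg.svd(A)
range_basis = U[:, :np.sum(s > 1e-10)]
print(range_basis.shape)                       # (3, 2)
```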
3. Space
Vectors, together with the addition and scalar multiplication defined on them, make up a space. Vectors can be transformed within (and only within) a space. We use a coordinate system (a basis) to describe the vectors in a space.
Both the kernel and the range are closed. This means that if you and a friend are trapped inside the kernel, then whether you add or scale, you still cannot get out of the kernel, and so the kernel constitutes a subspace. The same is true of the range.
Mathematicians have proved that the dimension of V (the space where the kernel lives, defined above) must equal the dimension of the kernel of any transformation matrix on it plus the dimension of that matrix's range:

dim V = dim Ker(A) + dim R(A)
A rigorous proof can be found in the references; here is an intuitive argument:
The dimension of V is the number of its basis vectors. Divide these basis vectors into two parts: one part lies in the kernel, and the other part consists of preimages of the nonzero images in the range (such a division is certainly possible, because the kernel and the range are both subspaces). Any vector in V, written in terms of this basis, then has one part in the kernel and another part among the preimages of the nonzero images in the range. Now transform this vector: the part in the kernel of course becomes zero, and the dimension of the remaining part is exactly equal to the dimension of the range.
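A quick numeric check of this dimension formula (the rank-nullity theorem) on the same illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

s = np.linalg.svd(A, compute_uv=False)
rank = np.sum(s > 1e-10)       # dim R(A)
nullity = np.sum(s <= 1e-10)   # dim Ker(A) (square A: len(s) == n)

# dim V = dim Ker(A) + dim R(A): here 3 = 1 + 2.
assert A.shape[1] == nullity + rank
print(f"dim V = {A.shape[1]} = {nullity} + {rank}")
```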
IV. The Relationship Between the Row Space and the Null Space of a Transformation Matrix
By the nature of matrix multiplication, the number of columns n of the transformation matrix equals the dimension of V, and the rank of the transformation matrix equals the dimension r of the range, so it can be concluded that:

dim Null(A) = n − r
Since the rank of A is also the dimension of the row space of A (note that for a matrix that is not of full rank this number is strictly less than the number of rows), the formula above can be rewritten as:

dim Null(A) + dim Row(A) = n
We write it in this form because it reveals that the null space of A and the row space of A are orthogonal complements of each other. They are orthogonal because the null space is the kernel: by definition, each row vector of A multiplied by any vector of the null space gives zero. They are complementary because together they span exactly the whole V space.
This orthogonal complementarity brings very good properties, since the bases of the null space of A and of the row space of A can be combined into a basis of V.
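The same example matrix illustrates both properties numerically: the null-space basis is orthogonal to every row of A, and the two bases together span all of V.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
row_space = Vt[:rank]      # orthonormal basis of the row space
null_space = Vt[rank:]     # orthonormal basis of the null space

# Orthogonal: every null-space vector is perpendicular to every row.
assert np.allclose(A @ null_space.T, 0.0)

# Complementary: the two bases together form a basis of V (R^3).
V_basis = np.vstack([row_space, null_space])
assert np.linalg.matrix_rank(V_basis) == A.shape[1]
```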
V. The Relationship Between the Column Space and the Left Null Space of a Transformation Matrix
Transposing the equation above gives:

dim Null(A^T) + dim Row(A^T) = m
Since transposing in effect swaps the range and the domain, this null space lives on the side of the range: it consists of all the vectors of W that are orthogonal to the range, and it is called the "left null space" (left null space). Noting that the row space of A^T is the column space of A, this gives:

dim Null(A^T) + dim Col(A) = m
Similarly, the left null space of A and the column space of A are orthogonal complements: together they span exactly the W space, and their bases combine into a basis of W.
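And the same check applied on the W side (again with my illustrative matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
col_space = U[:, :rank]    # orthonormal basis of the column space
left_null = U[:, rank:]    # orthonormal basis of the left null space

# Orthogonal: A^T y = 0 for every y in the left null space.
assert np.allclose(A.T @ left_null, 0.0)

# Complementary: together they form a basis of W (R^3).
W_basis = np.hstack([col_space, left_null])
assert np.linalg.matrix_rank(W_basis) == A.shape[0]
```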
VI. The Relationship Between the Row Space and the Column Space of a Transformation Matrix
The transformation matrix actually transforms the target vector from the row space to the column space.
The row space, column space, null space, and left null space of a matrix make up all the spaces studied in linear algebra; being clear about the relationships among them is very important for understanding changes of basis.
VII. The Secret of the Characteristic Equation
We try to construct a transformation matrix A with these properties: it transforms vectors into a range space whose basis is orthogonal; not only that, we also require this basis to be expressed in terms of V, a known basis of the original space. This lets us carry complicated vector problems into an exceptionally simple space.
Even if A itself is not of this form, replacing A by A^T A yields a symmetric and positive semidefinite matrix, and its eigenvectors are exactly the required basis V!
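Here is a minimal sketch of this construction (my own example; this is essentially how the right singular vectors of the SVD arise, though the original article does not name the SVD): A^T A is symmetric and positive semidefinite, and its eigenvectors form an orthonormal basis of the original space.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))    # an arbitrary, non-symmetric matrix

G = A.T @ A                    # symmetric, positive semidefinite

assert np.allclose(G, G.T)     # symmetric
vals, vecs = np.linalg.eigh(G)
assert np.all(vals >= -1e-10)  # eigenvalues are nonnegative

# The eigenvectors form an orthonormal basis of the original space
# (they are the right singular vectors of A, up to sign and order).
assert np.allclose(vecs.T @ vecs, np.eye(3))
```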
As a final reminder: a matrix is not the same thing as a transformation; regarding a matrix as a transformation merely provides one way of understanding it. We could just as well say that a matrix is only one written form of a transformation.
References:
[1] Matrix Foundation, http://blog.csdn.net/wangxiaojun911/article/details/4582021
[2] Matrix--eigenvector, http://blog.csdn.net/wangxiaojun911/article/details/6737933