Matrices and Eigenvectors

The basics of matrices were covered earlier. Today, let's look at one of their most important features: eigenvectors.

A matrix is a rather abstract mathematical object, and many people are intimidated right here. For example, why is matrix multiplication defined in such a strange way? The definition actually comes from practical engineering needs. If you only memorize a concept without knowing what it is for, your understanding stays abstract rather than intuitive, and you can never appreciate the subtlety of matrices.

Intuitive description 

Let's start with the intuition. The eigenvalue equation of a matrix is:

A x = λ x

What does this equation tell us? As mentioned last time, a matrix can be viewed as a transformation. On the left side of the equation, matrix A moves the vector x to another position. On the right side, the vector x is merely stretched, and the stretch factor is λ. The meaning is clear: one characteristic of matrix A is that it can lengthen (or shorten) the vector x by a factor of λ.

A given matrix A cannot stretch (or shrink) every vector x this way. The vectors that A merely stretches (or shrinks) are called the eigenvectors of A, and the amount of stretching (or shrinking) is the eigenvalue corresponding to that eigenvector.
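As a quick numerical check (a NumPy sketch; the matrix A below is purely illustrative, not taken from the article), each eigenvector returned by the library satisfies the equation above:

```python
import numpy as np

# An illustrative 2x2 matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and, as columns, the eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

# Each pair satisfies the eigenvalue equation A x = lambda x.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```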

It is worth noting that an eigenvector really stands for a whole class of vectors: any scalar multiple of an eigenvector clearly still satisfies the equation above, so all such multiples can be regarded as the same eigenvector, corresponding to the same eigenvalue.

If the eigenvalue is negative, the matrix not only stretches (or shrinks) the vector but also reverses its direction.

A matrix may be able to stretch (or shrink) several different vectors, so it may have several eigenvalues. Interestingly, if A is a real symmetric matrix, eigenvectors corresponding to different eigenvalues must be orthogonal to each other. The reason is short: if A x1 = λ1 x1 and A x2 = λ2 x2, then by symmetry λ1 (x1 · x2) = (A x1) · x2 = x1 · (A x2) = λ2 (x1 · x2), so when λ1 ≠ λ2 the dot product x1 · x2 must be zero.
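This orthogonality is easy to observe numerically. A minimal sketch (the symmetric matrix S below is an arbitrary illustrative choice):

```python
import numpy as np

# An arbitrary real symmetric matrix with distinct eigenvalues.
S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized for symmetric (Hermitian) matrices.
eigvals, eigvecs = np.linalg.eigh(S)

# Eigenvectors of distinct eigenvalues are mutually orthogonal, so the
# matrix Q whose columns are the eigenvectors satisfies Q^T Q = I.
assert np.allclose(eigvecs.T @ eigvecs, np.eye(3))
```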

We can also say that all the eigenvectors of a transformation matrix form a set of bases for that transformation. A basis can be understood as the axes of a coordinate system. What we usually use is the Cartesian coordinate system, but in linear algebra we can distort, stretch, and rotate coordinate systems; this is called a change of basis. We can choose the basis as needed, but its axes must be linearly independent. That is, no axis of the coordinate system may point in the same direction as another or be a combination of the other axes; otherwise the basis cannot span the original space. In principal component analysis, we choose the basis along the directions of maximum stretch and ignore the minor ones, which compresses the data greatly with little distortion.
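The principal-component idea in the last sentence can be sketched as follows (the synthetic data and variable names are illustrative; this is not a full PCA implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, strongly stretched along the first axis.
data = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])

cov = np.cov(data, rowvar=False)            # 2x2 covariance matrix (symmetric)
eigvals, eigvecs = np.linalg.eigh(cov)      # its eigenvectors form the new basis
principal = eigvecs[:, np.argmax(eigvals)]  # axis of maximum stretch

# Keep only the dominant axis: 2-D points compressed to 1-D coordinates.
reduced = data @ principal
```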

The eigenvectors of a transformation matrix are important as a basis for the space because along these directions the matrix only stretches vectors, without twisting or rotating them, which makes calculations very simple. That is why eigenvalues matter, and why our ultimate goal is the eigenvectors.

Several important abstract concepts 

Let's go back to the matrix and introduce several abstract concepts:

Kernel: the set of vectors that the transformation matrix maps to the zero vector, usually written Ker(A). Suppose you are a vector and there is a matrix about to transform you; if you are unlucky enough to fall into the kernel of this matrix, you become nothing after the transformation. Strictly speaking, "kernel" is a term for transformations; the analogous concept for a matrix is the "null space". Some texts use T when speaking of the transformation and A only for its matrix; in this article the matrix is treated directly as the transformation. The space the kernel lives in is called V, the original space of all the vectors.

Range: the set of vectors obtained by applying the transformation matrix to every vector in the space, usually written R(A). Suppose again that you are a vector about to be transformed; the range of the matrix is the set of all places you could end up, and you cannot land anywhere outside it. The dimension of the range is also called the rank. The space the range lives in is called W. The part of W that does not belong to the range will be discussed later.

Space: vectors together with addition and scalar multiplication make up a space. Vectors can (and can only) be transformed within a space. A coordinate system (basis) is used to describe the vectors in a space.

Both the kernel and the range are closed. This means that if you and a friend are both trapped in the kernel, then no matter how you are added together or multiplied by scalars, the result is still in the kernel; the kernel therefore forms a subspace. The same holds for the range.

Mathematicians have proved that for any transformation matrix on V, the dimension of V must equal the dimension of its kernel plus the dimension of its range:

dim(V) = dim(Ker(A)) + dim(R(A))

For a rigorous proof, refer to a textbook. Here is an intuitive argument:

The dimension of V is the number of basis vectors of V. These basis vectors can be split into two groups: one lies in the kernel, and the other consists of preimages of nonzero vectors in the range (such a split is certainly possible, because the kernel and the span of those preimages are complementary subspaces of V). Write any vector in V in terms of this basis: one part lies in the kernel, and the other part among the preimages of nonzero vectors in the range. Now transform this vector: the kernel part of course becomes zero, and the dimension of the other part is exactly the dimension of the range.
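The identity is also easy to check numerically. A sketch (the rank-deficient matrix below is an illustrative choice):

```python
import numpy as np

# An illustrative 3x3 matrix of rank 2: the third row is row1 + row2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

rank = np.linalg.matrix_rank(A)   # dim(R(A)), the dimension of the range
n = A.shape[1]                    # dim(V), the dimension of the domain
nullity = n - rank                # dim(Ker(A)) by the identity above

assert rank + nullity == n        # dim(V) = dim(Ker(A)) + dim(R(A))
```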

The relationship between the row space and the null space of a transformation matrix

In addition, by the nature of matrices, the number of columns of the transformation matrix equals the dimension of V, and the rank of the transformation matrix equals the dimension of the range R, so the identity can also be written:

number of columns of A = dim(null space of A) + rank of A

Since the rank of A is the dimension of the row space of A (note that for a matrix that is not of full rank, this number is smaller than the number of columns):

number of columns of A = dim(null space of A) + dim(row space of A)

Why write it this way? Because from this form we can discover that the null space of A and the row space of A are orthogonal complements. They are orthogonal because the null space is the kernel: by definition, multiplying any of its vectors by each row of A gives zero. They are complementary because together they add up to the whole space V.

This orthogonal complementarity leads to a very nice property: the bases of the null space of A and of the row space of A can be combined to form a basis of V.
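A numerical sketch of this complementarity, using the SVD (which hands us orthonormal bases for these subspaces; the matrix A below is an illustrative example):

```python
import numpy as np

# An illustrative 2x3 matrix of rank 1: the second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

row_space = Vt[:rank]    # orthonormal basis of the row space of A
null_space = Vt[rank:]   # orthonormal basis of the null space of A

# The two bases are orthogonal to each other, and together the rows of Vt
# form an orthonormal basis of the whole domain V = R^3.
assert np.allclose(row_space @ null_space.T, 0.0)
assert np.allclose(A @ null_space.T, 0.0)   # A really kills the null space
```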

The relationship between the column space and the left null space of a transformation matrix

Transposing the identity above gives:

number of rows of A = dim(null space of A^T) + dim(column space of A)

Since the practical effect of A^T is to swap the range and the domain, the null space of A^T consists of the vectors of W, lying outside the range of A, that the transposed map sends to the zero vector of V (a bit of a mouthful!). Some people call it the "left null space". So:

number of rows of A = dim(left null space of A) + dim(column space of A)

Similarly, the left null space of A is orthogonal to the column space of A, and together they fill up the space W; their bases combine to form a basis of W.
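The same kind of sketch works on the other side, again via the SVD (the matrix A below is illustrative):

```python
import numpy as np

# An illustrative 3x2 matrix of rank 1, so W = R^3 and the left null
# space is 2-dimensional.
A = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [3.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

col_space = U[:, :rank]    # orthonormal basis of the column space of A
left_null = U[:, rank:]    # orthonormal basis of the left null space

assert np.allclose(col_space.T @ left_null, 0.0)  # orthogonal inside W
assert np.allclose(A.T @ left_null, 0.0)          # A^T sends them to zero
```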

The relationship between the row space and the column space of a transformation matrix

Do not forget that what the transformation matrix actually does is send vectors from the row space over to the column space.

The row space, column space, null space, and left null space of a matrix make up all the spaces studied in linear algebra. Being clear about the relationships among them is essential for changes of basis.

The secret of the eigenvalue equation

We try to construct a transformation matrix A with the following property: it maps vectors into a range whose basis is orthogonal; moreover, every basis vector v of that range must arise in the form A u = λ v, where u is a known basis vector of the original space. In this way, complicated vector problems can be carried into an unusually simple space.

If the number of basis vectors u does not equal the number of v, replacing A with A^T A yields a symmetric positive semi-definite matrix, and its eigenvectors are exactly the required basis!
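A quick sketch of this last claim (the non-square matrix A is an illustrative choice): A^T A is symmetric and positive semi-definite, and its eigen-decomposition agrees with the singular value decomposition of A:

```python
import numpy as np

# An illustrative non-square matrix: domain and range differ in dimension.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

B = A.T @ A                            # 2x2, symmetric positive semi-definite
eigvals, eigvecs = np.linalg.eigh(B)

assert np.allclose(B, B.T)             # symmetric
assert np.all(eigvals >= -1e-10)       # no negative eigenvalues

# The eigenvalues of A^T A are the squared singular values of A, and its
# eigenvectors are A's right singular vectors (up to sign and ordering).
_, s, _ = np.linalg.svd(A)
assert np.allclose(np.sort(eigvals), np.sort(s**2))
```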

One more reminder: a matrix is not the same thing as a transformation. Viewing a matrix as a transformation is just one way to understand transformation matrices; put differently, a matrix is only one representation of a transformation.
