Matrices and Eigenvectors


We covered the basics of matrices earlier; today we look at one of their most important features: eigenvectors.


A matrix is a very abstract mathematical concept, and many people find it daunting at first. For example, why does matrix multiplication have such a strange definition? It is, in fact, defined that way because of practical needs. If you only know the concepts but not their uses, your understanding stays abstract rather than intuitive, and you will never feel the subtlety of matrices.



An intuitive description


Let's start with the intuition. The characteristic equation of a matrix is:

A * x = λ * x

What does this equation tell us? Last time we noted that a matrix can be regarded as a transformation: the left side of the equation transforms the vector x to another position, while the right side stretches the vector x by an amount λ. The meaning is then obvious: one characteristic of the matrix A is that it can lengthen (or shorten) the vector x by a factor of λ, and that's all.
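
As a quick numerical check (a minimal sketch of my own using NumPy, not code from the article; the matrix A below is an arbitrary example), we can ask NumPy for the eigenpairs of a matrix and confirm that each one satisfies A * x = λ * x:

import numpy as np

# Arbitrary example matrix; any square matrix would do.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` are the x's

for lam, x in zip(eigenvalues, eigenvectors.T):
    # A @ x should equal lam * x, up to floating-point error
    assert np.allclose(A @ x, lam * x)
    print("lambda =", lam, " A @ x =", A @ x, " lambda * x =", lam * x)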


For a given matrix A, not every vector x can be lengthened (or shortened) in this way. Any vector that A does lengthen (or shorten) is called an eigenvector of A, and the amount of lengthening (or shortening) is the eigenvalue corresponding to that eigenvector.


It is important to note that an eigenvector really stands for a whole class of vectors: any scalar multiple of an eigenvector clearly satisfies the equation above, so all such multiples can be regarded as the same eigenvector, and they all correspond to the same eigenvalue.


If an eigenvalue is negative, the matrix not only lengthens (or shortens) the vector but also flips it to point in the opposite direction.


A matrix can lengthen (or shorten) several different vectors, so it may have several eigenvalues. Interestingly, if A is a real symmetric matrix, then eigenvectors corresponding to different eigenvalues are guaranteed to be orthogonal to each other (because λ1 * (x1 · x2) = (A * x1) · x2 = x1 · (A * x2) = λ2 * (x1 · x2), which for λ1 ≠ λ2 forces x1 · x2 = 0).
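
We can check this orthogonality numerically. A minimal sketch, assuming an arbitrary symmetric example matrix S of my own choosing; np.linalg.eigh is NumPy's eigensolver for symmetric matrices and returns orthonormal eigenvectors:

import numpy as np

# Arbitrary real symmetric example matrix
S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(S, S.T)

eigenvalues, V = np.linalg.eigh(S)  # columns of V are eigenvectors
# Eigenvectors of a symmetric matrix are mutually orthogonal,
# so V.T @ V is (numerically) the identity matrix.
print(np.round(V.T @ V, 10))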


We can also say that all eigenvectors of a transformation matrix comprise a set of bases for this transformation matrix. The so-called base can be understood as the axis of the coordinate system. Most of our usual use of the Cartesian coordinate system, in linear algebra can be distorted, stretched, rotated, called the base of the transformation. We can set the base according to our needs, but the base of the axis must be linear-independent, that is, to ensure that the different axes of the coordinate system do not point to the same direction or can be a combination of other axes, otherwise the original space will "brace" not up. In PCA (Principal Component analysis), we can greatly compress the data and reduce distortion by setting the base in the largest direction of stretching, ignoring some small amount.
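
Here is a minimal PCA sketch along those lines (my own illustration in NumPy, not code from the article; the synthetic 2-D data set is an assumption). It places the new basis along the eigenvectors of the data's covariance matrix and keeps only the direction of greatest stretch:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic, correlated 2-D data (hypothetical example)
data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                             [1.0, 0.5]])

centered = data - data.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)

eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigenvalues in ascending order
principal_axis = eigenvectors[:, -1]             # direction of greatest variance

projected = centered @ principal_axis            # 2-D data compressed to 1-D
print("fraction of variance kept:", eigenvalues[-1] / eigenvalues.sum())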


It is important to use all the eigenvectors of a transformation matrix as the basis of the space, because in those directions the matrix only scales vectors, without distorting or rotating them, which makes computation much simpler. So the eigenvalues are important, and our ultimate goal is the eigenvectors.
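
One concrete payoff (a sketch of my own, assuming A is diagonalizable): in the eigenvector basis the matrix is diagonal, so something like the 10th power of A reduces to raising each eigenvalue to the 10th power:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # arbitrary diagonalizable example
eigenvalues, V = np.linalg.eig(A)

# In the eigenbasis: A = V @ diag(lambda) @ V^{-1},
# hence A^10 = V @ diag(lambda**10) @ V^{-1}.
A_pow10 = V @ np.diag(eigenvalues ** 10) @ np.linalg.inv(V)
assert np.allclose(A_pow10, np.linalg.matrix_power(A, 10))
print(A_pow10)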



A few important abstract concepts


We now go back to the meaning of the matrix and introduce several abstract concepts:


Kernel: the set of vectors that the transformation matrix sends to the zero vector, usually written Ker(A). Imagine you are a vector and some matrix comes to transform you; if you unluckily fall inside the kernel of that matrix, then, regrettably, after the transformation you become the void of the zero vector. Strictly speaking, "kernel" is a concept attached to a transformation (some texts write T for the transformation; when it is tied to a matrix A, the matrix itself is regarded directly as the transformation), and the analogous concept for matrices is called the "null space". The space in which the kernel lives is defined as the space V, the space where all the vectors originally live.
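
A short numerical sketch (mine, using NumPy; the rank-1 matrix A is a made-up example): the rows of Vt from the SVD that correspond to zero singular values span Ker(A):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so its kernel is 2-dimensional

U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:]           # rows spanning Ker(A)

for x in null_basis:
    # every kernel vector is sent to the zero vector
    assert np.allclose(A @ x, 0.0)
print(null_basis)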


Range: the set of vectors obtained by applying the transformation matrix to all the vectors of the space, usually written R(A). Again suppose you are a vector and some matrix transforms you: the range of that matrix is the set of all positions where you could possibly land; you cannot escape beyond them. The dimension of the range is also called the rank. The space in which the range lives is defined as the space W. We will discuss later the parts of W that do not belong to the range.
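
To see the rank as the dimension of the range (again my own sketch; the matrix below is a made-up example whose third column is the sum of the first two):

import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])  # third column = first + second, so rank 2

print("rank:", np.linalg.matrix_rank(A))  # 2: the range is a plane inside W

# Any image A @ x already lies in the range, so appending it
# to the columns of A does not increase the rank.
image = A @ np.array([3.0, -1.0, 2.0])
print(np.linalg.matrix_rank(np.column_stack([A, image])))  # still 2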


Space: vectors, together with the operations of addition and scalar multiplication, make up a space. Vectors can be transformed (and can only be transformed) within a space. We use a coordinate system (a basis) to describe the vectors in a space.


Both the kernel and the range are closed. This means that if you and your friends are trapped inside the kernel, then however you add or scale one another, you will never get out of the kernel. Each of them therefore forms a subspace. The same goes for the range.


Mathematicians have proved that the dimension of V must equal the dimension of the kernel of any of its transformation matrices plus the dimension of that matrix's range:

Dim(V) = Dim(Ker(A)) + Dim(R(A))

For a rigorous proof, consult a textbook; here is an intuitive argument:

The dimension of V is the number of basis vectors of V. These basis vectors can be divided into two parts: one part lies in the kernel, and the other part consists of preimages of the nonzero vectors of the range (this division is certainly possible, because the kernel and the range are both independent subspaces). Write any vector of V in terms of this basis: one component lies in the kernel, the other among the preimages of the nonzero part of the range. Now transform the vector: the kernel component of course becomes zero, and the dimension of the other component is exactly the dimension of the range.
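
The identity is easy to check numerically (a sketch of mine; the random low-rank matrix is a made-up example, and Dim(V) is the number of columns of A under the usual x ↦ A * x convention):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 7))  # 5x7 matrix, rank <= 3

rank = np.linalg.matrix_rank(A)           # Dim(R(A))
_, s, _ = np.linalg.svd(A)
nullity = A.shape[1] - np.sum(s > 1e-10)  # Dim(Ker(A))

print(rank + nullity == A.shape[1])       # True: Dim(Ker(A)) + Dim(R(A)) = Dim(V)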



The relationship between the row space and the null space of the transformation matrix


In addition, by the shape of the matrix, the number of columns of the transformation matrix equals the dimension of V (an m×n matrix transforms n-dimensional vectors), and the rank of the transformation matrix equals the dimension of the range, so the identity can also be written as:

Number of columns of A = Dim(null space of A) + rank of A

And because the rank of A is also the dimension of the row space of A (note that for a non-full-rank matrix this number is definitely smaller than the number of rows):

Number of columns of A = Dim(null space of A) + Dim(row space of A)


Why write it in this form? Because from here we can see that the null space of A and the row space of A are orthogonal complements of each other. They are orthogonal because the null space is the kernel: by definition, each row vector of A multiplied by a null-space vector is zero. They are complementary because together their dimensions add up to span the entire space V.


This orthogonal complementarity has a very nice consequence: a basis of the null space of A and a basis of the row space of A can be merged to form a basis of V.
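
A numerical check (my own sketch; the small matrix is a made-up example). The SVD hands us orthonormal bases of both subspaces at once:

import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0]])  # maps V = R^4 into W = R^2

_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
row_basis = Vt[:rank]    # orthonormal basis of the row space
null_basis = Vt[rank:]   # orthonormal basis of the null space

# Every row-space basis vector is orthogonal to every null-space basis vector,
print(np.round(row_basis @ null_basis.T, 10))  # all zeros
# and together the two bases have Dim(V) vectors: a basis of V.
print(rank + len(null_basis) == A.shape[1])    # True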



The relationship between the column space and the left null space of the transformation matrix


If you apply the same identity to the transpose A^T, you get:

Number of rows of A = Dim(null space of A^T) + Dim(column space of A)

(since the columns of A^T are the rows of A, and the row space of A^T is the column space of A). Because the practical effect of A^T is to swap the domain and the range, the null space of A^T consists of the vectors on the W side that get sent to 0 (a bit of a mouthful!); it is usually called the "left null space" of A, since its vectors y satisfy y^T * A = 0, multiplying A from the left. Thus:

Number of rows of A = Dim(left null space of A) + Dim(column space of A)


The left null space of A is likewise orthogonal to the column space of A, and together they add up to exactly the space W. Their bases combined form a basis of W.
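
The same kind of numerical check works on the W side (again my own sketch with a made-up matrix); this time the columns of U from the SVD supply the bases:

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # maps V = R^2 into W = R^3

U, s, _ = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
col_basis = U[:, :rank]        # orthonormal basis of the column space
left_null_basis = U[:, rank:]  # orthonormal basis of the left null space

print(np.round(col_basis.T @ left_null_basis, 10))    # all zeros: orthogonal
print(rank + left_null_basis.shape[1] == A.shape[0])  # together they span W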



The relationship between the row space and the column space of the transformation matrix


Don't forget that the transformation matrix in fact carries the vectors it acts on from the row space into the column space.
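
To make that concrete (a sketch of my own with a made-up matrix and vector): split an input vector into its row-space and null-space components; A annihilates the null-space part and carries the row-space part into the column space:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0]])
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)

x = np.array([1.0, 1.0, 1.0])
x_row = Vt[:rank].T @ (Vt[:rank] @ x)  # projection of x onto the row space
x_null = x - x_row                     # the leftover null-space component

assert np.allclose(A @ x_null, 0.0)    # the null-space part vanishes
assert np.allclose(A @ x, A @ x_row)   # only the row-space part is carried to W
print(A @ x)                           # this image lies in the column space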


The row space, column space, null space, and left null space of a matrix make up all the spaces studied in linear algebra, and being clear about the relationships among them is very important for keeping track of the bases involved in a transformation.



The secret of the characteristic equation


We are trying to construct a transformation matrix A with these properties: it transforms vectors into a range space whose basis is orthogonal; moreover, every basis vector v of that space satisfies A * u = λ * v, where u is a known basis vector of the original space. Such a matrix lets us carry a complicated vector problem into an exceptionally simple space.


If the number of u's is not equal to the number of v's, replace A with A^T * A; the result is a symmetric, positive semi-definite matrix, and its eigenvectors are the required basis v.
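
A sketch of that last step (my own NumPy illustration with a random rectangular matrix): A^T * A comes out symmetric and positive semi-definite, and its eigenvalues are the squared singular values of A:

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))  # rectangular: domain and range dimensions differ

G = A.T @ A
assert np.allclose(G, G.T)            # symmetric
eigenvalues, V = np.linalg.eigh(G)    # columns of V form the orthogonal basis
print(np.all(eigenvalues >= -1e-10))  # positive semi-definite

# The eigenvalues of A^T @ A are the squared singular values of A.
_, s, _ = np.linalg.svd(A)
print(np.allclose(np.sort(eigenvalues), np.sort(s**2)))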


Once again: a matrix is not the same thing as a transformation. Viewing a matrix as a transformation is merely one way to understand transformation matrices; in other words, a matrix is just one written form of a transformation.



