Eigenvalue decomposition and the meaning of singular value decomposition

Source: Internet
Author: User
Keywords: eigenvalue decomposition, eigenvector, singular value decomposition, matrix multiplication

One, eigenvalue decomposition
1. Matrix multiplication

Before introducing the geometric meaning of eigenvalues and eigenvectors, we first introduce the geometric meaning of matrix multiplication.

Multiplying a matrix by a vector corresponds to a linear transformation: it turns the vector into a new vector with a different direction or length. In this process the original vector is mainly rotated and scaled. If, for certain vectors, the matrix only produces a scaling and no rotation, then those vectors are called eigenvectors of the matrix, and the scaling factors are the eigenvalues.

For example, consider a symmetric matrix M and the linear transformation it corresponds to.

Because the matrix M is symmetric, this transformation is a stretch along the X and Y axes: where an element of M is greater than 1 the corresponding direction is stretched, and where it is less than 1 it is compressed.

If the matrix M is not symmetric, the transformation it describes is shown in the following illustration:

This is actually a stretch along one particular direction in the plane (the direction indicated by the blue arrow in the illustration), and the blue arrow marks the main direction of change. There may be more than one direction of change, but if we want to describe a transformation, we can describe its main directions of change.
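
To make this concrete, here is a minimal MATLAB sketch; the matrices M and N below are assumed illustrative examples (a pure axis stretch and a shear), not the ones in the figures above:

% A symmetric (here diagonal) matrix: a pure stretch along the axes
M = [3 0; 0 1];
v = [1; 0];     % an eigenvector of M (the X-axis direction)
w = [1; 1];     % not an eigenvector
M * v           % gives [3; 0]: same direction, only scaled by 3
M * w           % gives [3; 1]: the direction has changed

% A non-symmetric matrix: a shear
N = [1 1; 0 1];
N * [1; 0]      % gives [1; 0]: the X-axis direction is left unchanged
N * [0; 1]      % gives [1; 1]: other vectors are tilted toward that direction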

2. Eigenvalue decomposition and eigenvectors

If a vector v is an eigenvector of a matrix A, this can be written as Av = λv, where λ is the eigenvalue corresponding to the eigenvector v.
Eigenvalue decomposition decomposes the matrix A into the form A = QΣQ⁻¹,

where Q is the matrix whose columns are the eigenvectors of A, and Σ is a diagonal matrix whose diagonal elements are the eigenvalues, arranged from large to small. The eigenvectors corresponding to these eigenvalues describe the directions of change of the matrix, ordered from the main direction of change to the secondary ones. In other words, the information in matrix A can be represented by its eigenvalues and eigenvectors.
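
As a small worked example (the matrix here is chosen only for illustration): the symmetric matrix A = [2 1; 1 2] has eigenvalues 3 and 1 with eigenvectors (1, 1)/√2 and (1, -1)/√2, so A = QΣQ⁻¹ with Q = (1/√2)·[1 1; 1 -1] and Σ = diag(3, 1).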

For a high-dimensional matrix, the matrix corresponds to a linear transformation in a high-dimensional space, and we can imagine that this transformation also has many directions of change. The first N eigenvectors obtained from the eigenvalue decomposition correspond to the N most important directions of change of this matrix, and we can approximate the matrix (the transformation) using just these first N directions of change.
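
A minimal MATLAB sketch of this idea, using an assumed random symmetric matrix S (symmetry keeps the eigenvectors orthogonal, so the truncation below is well behaved):

S = randn(100);  S = (S + S') / 2;          % a random symmetric matrix
[Q, L] = eig(S);
[~, idx] = sort(abs(diag(L)), 'descend');   % order directions by importance
Q = Q(:, idx);  L = L(idx, idx);

n = 10;                                     % keep only the n most important directions
S_n = Q(:, 1:n) * L(1:n, 1:n) * Q(:, 1:n)';
norm(S - S_n) / norm(S)                     % relative error of the approximation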

To sum up, eigenvalue decomposition yields eigenvalues and eigenvectors: an eigenvalue indicates how important a feature is, and the corresponding eigenvector describes what that feature is. However, eigenvalue decomposition also has limitations; for example, the matrix being decomposed must be square.

In machine learning feature extraction, the meaning is that the eigenvector corresponding to the largest eigenvalue contains the most information. If some eigenvalues are very small, the information in their directions is very small, and they can be used for dimensionality reduction: delete the data along the directions with small eigenvalues and keep only the data along the directions with large eigenvalues. After doing so, the amount of data decreases, but the useful information changes little. PCA dimensionality reduction is based on this idea.
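
A minimal MATLAB sketch of PCA along these lines (the data matrix X is a made-up example with one sample per row; this is only meant to show the eigendecomposition-of-covariance idea):

X  = randn(200, 5) * diag([2 1 0.5 0.1 0.1]);   % 200 samples, 5 features of unequal spread
Xc = X - repmat(mean(X), size(X, 1), 1);        % center the data
C  = cov(Xc);                                   % 5 x 5 covariance matrix
[V, D] = eig(C);
[~, idx] = sort(diag(D), 'descend');            % largest eigenvalue = most information
k  = 2;                                         % keep only the k strongest directions
W  = V(:, idx(1:k));                            % projection onto those directions
Y  = Xc * W;                                    % reduced data: 200 x 2 instead of 200 x 5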

In MATLAB, the eigenvalues and the eigenvector matrix can be obtained with the eig function, for example:

>> A = magic(5)
A =
    17    24     1     8    15
    23     5     7    14    16
     4     6    13    20    22
    10    12    19    21     3
    11    18    25     2     9
>> [V, D] = eig(A)
V =
   -0.4472    0.0976   -0.6330    0.6780   -0.2619
   -0.4472    0.3525    0.5895    0.3223   -0.1732
   -0.4472    0.5501   -0.3915   -0.5501    0.3915
   -0.4472   -0.3223    0.1732   -0.3525   -0.5895
   -0.4472   -0.6780    0.2619   -0.0976    0.6330
D =
   65.0000         0         0         0         0
         0  -21.2768         0         0         0
         0         0  -13.1263         0         0
         0         0         0   21.2768         0
         0         0         0         0   13.1263

The diagonal elements of D are the eigenvalues (they represent the scaling factors); D plays the role of Σ in the eigenvalue decomposition formula. Each column of V corresponds to the eigenvalue in the same column of D and is the associated eigenvector, so V plays the role of Q in the eigenvalue decomposition.
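
Continuing in the same MATLAB session, one can check that each column of V is only scaled by A and that A is recovered from V and D:

A * V(:, 1) - D(1, 1) * V(:, 1)   % close to the zero vector: A*q = lambda*q
norm(A - V * D / V)               % close to 0: A = V*D*inv(V), i.e. Q*Sigma*Q^(-1)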

Two, singular value decomposition

1. Singular value

Eigenvalue decomposition is a good way to extract the features of a matrix, but it only works for square matrices. In the real world, most of the matrices we encounter are not square: for example, if there are N students and each student has M scores, we get an N * M matrix that is generally not square. How can we describe the important features of such an ordinary matrix? Singular value decomposition does exactly this; it is a decomposition method that can be applied to any matrix.
The decomposition has the form A = UΣVᵀ.
Assuming A is an M * N matrix, the resulting U is an M * M square matrix (its columns are called the left singular vectors), Σ is an M * N matrix whose entries are all zero except on the diagonal, where the entries are called the singular values, and Vᵀ (the transpose of V) is an N * N square matrix (the columns of V are called the right singular vectors).
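
A minimal MATLAB sketch showing these shapes on an assumed 6 * 4 example:

A = randn(6, 4);         % an M * N matrix that is not square
[U, S, V] = svd(A);      % A = U * S * V'
size(U)                  % 6 x 6: columns are the left singular vectors
size(S)                  % 6 x 4: diagonal entries are the singular values
size(V)                  % 4 x 4: columns are the right singular vectors
norm(A - U * S * V')     % close to 0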

2. Singular values and eigenvalues

So how do singular values and eigenvalues correspond? We multiply the transpose of A by A and compute the eigenvectors and eigenvalues of the square matrix AᵀA, which satisfy

(AᵀA) vᵢ = λᵢ vᵢ

The vectors vᵢ obtained here are the right singular vectors mentioned above. In addition,

σᵢ = √λᵢ,    uᵢ = A vᵢ / σᵢ

The σᵢ here are the singular values, and the uᵢ are the left singular vectors mentioned above.
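
This correspondence is easy to check numerically; a small MATLAB sketch with an arbitrary example matrix:

A  = randn(6, 4);
sv = svd(A);                           % singular values of A, largest first
ev = sort(eig(A' * A), 'descend');     % eigenvalues of A'*A
[sv, sqrt(ev)]                         % the two columns agree: sigma_i = sqrt(lambda_i)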

The singular values σ behave much like eigenvalues: in the matrix Σ they are also arranged from large to small, and they decrease particularly quickly. In many cases the sum of the first 10% or even 1% of the singular values already accounts for more than 99% of the sum of all the singular values. In other words, we can approximate the matrix using only the first r singular values (with r much smaller than M and N), that is, a partial singular value decomposition:

A ≈ U_r Σ_r Vᵀ_r, where U_r is M * r, Σ_r is r * r, and Vᵀ_r is r * N.

The product of these three matrices on the right is a matrix close to A, and the closer r is to N, the closer the product is to A.
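
A minimal MATLAB sketch of this partial singular value decomposition, using an assumed matrix whose information is concentrated in a small number of directions:

B = randn(500, 50) * randn(50, 300);            % information lives in about 50 directions
[U, S, V] = svd(B, 'econ');
r  = 50;                                        % keep only the first r singular values
Br = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';      % product of the three truncated matrices
norm(B - Br, 'fro') / norm(B, 'fro')            % essentially 0: r directions already capture B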

The advantages of singular value decomposition relative to eigenvalue decomposition:
① the matrix being decomposed can be an arbitrary (not necessarily square) matrix;
② when the signal is reconstructed, different numbers of left and right singular vectors can be kept, i.e. the two dimensions can be truncated independently.
Another noteworthy point is that the singular vectors obtained from singular value decomposition are always orthogonal, and the eigenvectors obtained from eigenvalue decomposition are orthogonal as well when the matrix being decomposed is symmetric.
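
This too can be checked directly in MATLAB (again with an arbitrary example matrix):

[U, S, V] = svd(randn(6, 4));
norm(U' * U - eye(6))    % close to 0: the left singular vectors are orthonormal
norm(V' * V - eye(4))    % close to 0: the right singular vectors are orthonormal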

The relation between singular values and PCA is explained well in Mathematics in Machine Learning (5): The powerful matrix singular value decomposition (SVD) and its applications.
