Basic knowledge of eigenvalues and singular values


When reading papers, one often encounters eigenvalues, eigenvectors, singular values, right singular vectors, and other related concepts, and each time the details feel only half-remembered. This article starts from the basics of eigenvalues and singular values, explores what singular values and eigenvalues really mean, and then organizes the related knowledge.

Eigenvalue decomposition and singular value decomposition (SVD) are widely used in principal component analysis (PCA) and machine learning. PCA can be implemented in two ways, one based on eigenvalue decomposition and the other on singular value decomposition, and both decompositions serve the same purpose: extracting the most important features of a matrix. For most people, eigenvalues and singular values remain abstract notions from linear algebra, with no familiar physical meaning or application background. In fact, they do have concrete physical meaning: for example, singular value decomposition factors a complex matrix into the product of several smaller, simpler matrices, and these small matrices capture the matrix's important features (singular values, left and right singular vectors, and so on).

One blogger explained singular value decomposition with the example of "describing a person": if you say someone has thick eyebrows, a square face, a beard, and black-framed glasses, those few features are enough to give others a fairly clear mental picture, even though a human face actually has countless features. We can describe people this way because humans are naturally very good at extracting important features. Here "the countless features of a human face" correspond to the matrix to be decomposed, "a few features" correspond to the small matrices produced by the decomposition, and "the natural human ability to extract features" is the singular value decomposition (SVD) process itself. Having said this much about the physical meaning of singular value decomposition, we now introduce eigenvalues and singular values in turn.

1. Eigenvalues

The result of multiplying a square matrix A by a vector x is still a vector of the same dimension. So the matrix multiplication Ax corresponds to a transformation (rotation, translation, scaling) that turns one vector into another vector of the same dimension. For eigenvalues and eigenvectors, recall the definition from linear algebra: let A be an n-th order square matrix; if there exist a scalar λ and an n-dimensional nonzero vector x such that

Ax = λx

then λ is called an eigenvalue of the square matrix A, and x is an eigenvector of A corresponding (belonging) to the eigenvalue λ.
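As a minimal illustration (the matrix here is an invented example, not from the original text), the following NumPy sketch computes eigenvalues and eigenvectors with np.linalg.eig and checks the definition Ax = λx:

```python
import numpy as np

# A small invented square matrix
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the definition A x = lambda x for each eigenpair
for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    x = eigenvectors[:, i]
    print(np.allclose(A @ x, lam * x))  # True for every i
```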

All eigenvectors of a transformation matrix form a set of bases for that matrix. A basis can be understood as the axes of a coordinate system. We usually work in the Cartesian coordinate system, but in linear algebra the coordinate system can be distorted, stretched, or rotated, which is called a change of basis. We can choose the basis according to our needs, but the basis vectors must be linearly independent: the axes of the coordinate system must not point in the same direction or be expressible as combinations of the other axes, since otherwise they could not span the original space. From the viewpoint of linear spaces, in a linear space with a defined inner product, the eigendecomposition of an n-th order symmetric matrix produces n orthonormal basis vectors of the space and then projects the matrix onto these n basis vectors: the n eigenvectors are the n orthonormal basis vectors, while the modulus of each eigenvalue gives the length of the matrix's projection onto the corresponding basis vector. The larger the eigenvalue, the greater the variance of the matrix along the corresponding eigenvector, the greater the energy, and the more information it carries. To sum up, eigenvalue decomposition yields eigenvalues and eigenvectors: the eigenvalue says how important a feature is, and the eigenvector says what that feature is. Each eigenvector can be understood as a linear subspace, and we can take advantage of these linear subspaces to do many things. However, eigenvalue decomposition also has a significant limitation: the matrix being decomposed must be square.
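To make the orthonormal-basis point concrete, here is a small sketch, assuming an invented symmetric matrix chosen purely for illustration; np.linalg.eigh, the NumPy routine for symmetric matrices, returns eigenvectors that form an orthonormal basis:

```python
import numpy as np

# An invented symmetric 3x3 matrix
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is for symmetric/Hermitian matrices; eigenvectors come out orthonormal
w, Q = np.linalg.eigh(S)

# The eigenvectors form an orthonormal basis: Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(3)))  # True

# Projecting S onto that basis recovers the eigenvalues on the diagonal
print(np.allclose(Q.T @ S @ Q, np.diag(w)))  # True
```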

In machine-learning feature extraction, the meaning is that the eigenvector direction corresponding to the largest eigenvalue contains the most information. If some eigenvalues are small, the corresponding directions carry very little information, and this can be used for dimensionality reduction: delete the data along the directions of the small eigenvalues and keep only the data along the directions of the large eigenvalues. After this, the amount of data decreases, but the useful information changes little; PCA dimensionality reduction is based on exactly this idea, as the sketch below illustrates.
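Here is a hedged sketch of this idea (the toy data and the number of retained components are invented for illustration): PCA keeps the eigenvector directions of the covariance matrix with the largest eigenvalues and drops the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 samples, 5 features (toy data)
X = X - X.mean(axis=0)                 # center the data

C = (X.T @ X) / (len(X) - 1)           # covariance matrix (5x5, symmetric)
w, V = np.linalg.eigh(C)               # eigenvalues in ascending order

k = 2                                  # keep the k largest-eigenvalue directions
top = V[:, np.argsort(w)[::-1][:k]]    # eigenvectors of the k largest eigenvalues
X_reduced = X @ top                    # project: 200x5 -> 200x2
print(X_reduced.shape)                 # (200, 2)
```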

2. Eigenvalue decomposition

Suppose the square matrix A has n eigenvalues λ1, ..., λn with corresponding eigenvectors x1, ..., xn; then:

A xi = λi xi,  i = 1, 2, ..., n

Writing the above in matrix form:

A (x1, x2, ..., xn) = (x1, x2, ..., xn) diag(λ1, λ2, ..., λn)

If (x1, x2, ..., xn) is invertible, we can multiply both sides on the right by its inverse, so that the square matrix A can be expressed directly through its eigenvalues and eigenvectors. Let

Q = (x1, x2, ..., xn)

Σ = diag(λ1, λ2, ..., λn)

Then the expression

A = QΣQ^{-1}

is called the eigenvalue decomposition of the square matrix A; it represents the matrix A using only its eigenvalues and eigenvectors.
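A quick numerical check of this identity, reusing the invented matrix from the earlier sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors
Sigma = np.diag(eigenvalues)        # diagonal matrix of eigenvalues

# A = Q Sigma Q^{-1} (valid here because Q is invertible)
print(np.allclose(Q @ Sigma @ np.linalg.inv(Q), A))  # True
```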

3. Singular values

Eigenvalues and eigenvalue decomposition apply only to square matrices, but in the real world most of the matrices we see are not square. For example, if each of n channels records m data points, we obtain an n×m matrix. How, then, can we extract its features and their importance, as we did for a square matrix? Singular value decomposition does exactly this: singular values play the role that eigenvalues play in a square matrix, and singular value decomposition is the counterpart of eigenvalue decomposition.

4. Singular value decomposition (SVD)

Singular value decomposition (SVD) is a decomposition method applicable to arbitrary matrices. To understand it, we first illustrate two-dimensional SVD geometrically: for an arbitrary 2×2 matrix M, SVD transforms one orthogonal grid into another orthogonal grid. This fact can be described with vectors: first, choose two mutually orthogonal unit vectors v1, v2 such that Mv1 and Mv2 are also orthogonal (Fig. 1). Let u1, u2 denote the unit vectors in the directions of Mv1 and Mv2, respectively (Fig. 2), so that σ1u1 = Mv1 and σ2u2 = Mv2. Here σ1 and σ2 are the moduli of Mv1 and Mv2 along these two directions, also known as the singular values of the matrix M.

[Fig. 1 and Fig. 2: the orthogonal unit vectors v1, v2 and their images Mv1 = σ1u1, Mv2 = σ2u2]

So we have the relationships: Mv1 = σ1u1, Mv2 = σ2u2.

Before the transformation, a vector x can be represented as: x = (v1 · x)v1 + (v2 · x)v2

After applying M:

Mx = (v1 · x)Mv1 + (v2 · x)Mv2
Mx = (v1 · x)σ1u1 + (v2 · x)σ2u2

The inner product of two vectors can be written using a transpose: v · x = v^T x = |v||x|cosθ

So the equation above can be converted to:

Mx = u1σ1v1^T x + u2σ2v2^T x

M = u1σ1v1^T + u2σ2v2^T

This formula is usually expressed as: M = UΣV^T

The column vectors of U (the left singular matrix) are u1 and u2; Σ is a diagonal matrix whose diagonal elements are σ1 and σ2; the column vectors of V (the right singular matrix) are v1 and v2. The superscript T denotes the transpose of the matrix V. This means that any matrix M can be decomposed into the product of three matrices: V represents an orthonormal basis of the original domain, U represents an orthonormal basis of the co-domain after the transformation M, and Σ describes how each vector in V is scaled into the corresponding vector in U. The relative sizes of the multiplied matrices are shown in Figure 3.

[Figure 3: the relative sizes of the matrices U, Σ, and V^T in the product M = UΣV^T]
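The geometric picture above can be checked in a few lines of NumPy (the 2×2 matrix is an arbitrary invented example): np.linalg.svd returns U, the singular values, and V^T, and M maps each right singular vector (a row of V^T) to σi times the corresponding left singular vector.

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # an arbitrary invented 2x2 matrix

U, s, Vt = np.linalg.svd(M)  # M = U @ diag(s) @ Vt

# M maps the orthonormal right singular vectors v1, v2 onto
# orthogonal vectors of lengths sigma1, sigma2 in the directions u1, u2
v1, v2 = Vt[0], Vt[1]
print(np.allclose(M @ v1, s[0] * U[:, 0]))  # M v1 = sigma1 u1
print(np.allclose(M @ v2, s[1] * U[:, 1]))  # M v2 = sigma2 u2
print(np.allclose(U @ np.diag(s) @ Vt, M))  # M = U Sigma V^T
```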

5. Relationship between eigenvalues and singular values 

Since eigenvalues and singular values both describe the importance of a matrix's features (through eigenvectors and singular vectors), there must be some relationship between them. For a general matrix A, multiplying A by its transpose gives a square matrix A^T A, and we can obtain the eigenvalues of this square matrix from A^T A vi = λi vi. The resulting eigenvectors vi are exactly the right singular vectors mentioned above, and together they form the right singular matrix. In addition, we can also obtain the singular values and the left singular vectors (matrix):

σi = √λi

ui = A vi / σi
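A small numerical sanity check of this relationship, using an invented random non-square matrix: the eigenvalues of A^T A should be the squares of the singular values of A.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))            # an invented non-square matrix

# Eigenvalues of A^T A, sorted in descending order
w = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

# Singular values of A (returned in descending order)
s = np.linalg.svd(A, compute_uv=False)

print(np.allclose(np.sqrt(w), s))      # sigma_i = sqrt(lambda_i)
```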

Here σ is the singular value and u the left singular vector mentioned above. Like the eigenvalues, the singular values in Σ are arranged from largest to smallest, and they decrease particularly fast: in many cases, the sum of the largest 10% or even 1% of the singular values accounts for more than 99% of the sum of all the singular values. In other words, we can approximate the matrix using only the r largest singular values, which defines the partial (truncated) singular value decomposition:

A(m×n) ≈ U(m×r) Σ(r×r) V^T(r×n)

Here r is a number much smaller than m and n. The product of the three matrices on the right is a matrix close to A, and the closer r is to n, the closer the product is to A. Moreover, the combined area of the three matrices (from a storage point of view, the smaller the matrices, the less storage they need) is much smaller than that of the original matrix A, so if we want to store the original matrix A in compressed form, we only need to save the three matrices U, Σ, and V.
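A minimal sketch of this compression idea (the matrix size and the rank r are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 80))              # original m x n matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 10                                      # keep only the r largest singular values
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :] # rank-r approximation of A

# Storage: m*r + r + r*n numbers instead of m*n
print(100 * 10 + 10 + 10 * 80, "vs", 100 * 80)      # 1810 vs 8000
print(np.linalg.norm(A - A_r) / np.linalg.norm(A))  # relative error
```

For random data like this the relative error is large, but for real-world matrices whose singular values decay fast, as described above, a small r already recovers most of A.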

References:

Singular value decomposition and its geometric meaning: http://blog.csdn.net/redline2005/article/details/24100293

Mathematics in machine learning (5): the powerful matrix singular value decomposition (SVD) and its applications: http://www.cnblogs.com/LeftNotEasy/archive/2011/01/19/svd-and-applications.html

Linear discriminant analysis (LDA) and principal component analysis (PCA): http://www.cnblogs.com/LeftNotEasy/archive/2011/01/08/lda-and-pca-machine-learning.html
