Singular value decomposition (SVD)


While recently studying recommendation algorithms, I kept failing to really understand SVD, until I found a good explanation with examples on the ChinaUnix forum, so I am reposting it here to share: http://blog.chinaunix.net/uid-20761674-id-4040274.html

Original article: We Recommend a Singular Value Decomposition

For background on linear transformations, see: Singular value decomposition (SVD): the geometric meaning of linear transformations.

Singular value decomposition (SVD)

This section looks at the two-dimensional SVD geometrically: for any 2 x 2 matrix M, the SVD shows that M transforms one mutually perpendicular mesh (orthogonal grid) into another perpendicular mesh.

We can describe this fact with vectors: first, choose two mutually orthogonal unit vectors v1 and v2 such that Mv1 and Mv2 are also orthogonal.

Let u1 and u2 be the unit vectors in the directions of Mv1 and Mv2 respectively, so that σ1·u1 = Mv1 and σ2·u2 = Mv2. Here σ1 and σ2 are the lengths of M's output in these two directions, and they are called the singular values of the matrix M.

So we have the following relationship

Mv1 = σ1u1
Mv2 = σ2u2

We can now describe how an arbitrary vector x is mapped by the linear transformation M. Since v1 and v2 are orthogonal unit vectors, we have:

x = (v1 · x) v1 + (v2 · x) v2

This means that:

Mx = (v1 · x) Mv1 + (v2 · x) Mv2
Mx = (v1 · x) σ1u1 + (v2 · x) σ2u2

The inner product of two vectors can be written using the transpose:

v · x = v^T x

The final formula is

Mx = u1σ1 v1^T x + u2σ2 v2^T x
M = u1σ1 v1^T + u2σ2 v2^T

These formulas are often expressed as

M = UΣV^T

The column vectors of the matrix U are u1 and u2, Σ is a diagonal matrix whose diagonal entries are σ1 and σ2, and the column vectors of the matrix V are v1 and v2. The superscript T denotes the transpose of the matrix V.

This means that any matrix M can be decomposed into a product of three matrices: V describes an orthonormal basis of the domain, U describes an orthonormal basis of the codomain, and Σ describes how much the vectors in V are stretched to give the vectors in U.
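To make the formula concrete, here is a minimal sketch in Python with NumPy (the 2 x 2 matrix is an arbitrary illustration, not the one from the original figures), checking both M = UΣV^T and Mvi = σiui:

```python
import numpy as np

# An arbitrary 2 x 2 matrix, chosen only for illustration.
M = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# np.linalg.svd returns U, the singular values, and V^T.
U, s, Vt = np.linalg.svd(M)
print("singular values:", s)                 # sigma1 >= sigma2 >= 0

# Check M = U * Sigma * V^T.
print(np.allclose(M, U @ np.diag(s) @ Vt))   # True

# Check M vi = sigma_i * ui for both orthogonal unit vectors vi.
V = Vt.T
for i in range(2):
    print(np.allclose(M @ V[:, i], s[i] * U[:, i]))   # True
```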

How do we find the singular value decomposition?

In fact, the singular value decomposition of any matrix can be found; how do we do it? Suppose we have a unit circle in the domain, as shown in the figure. After the transformation by M, the unit circle in the codomain becomes an ellipse: its major axis (Mv1) and minor axis (Mv2) correspond to the two transformed orthogonal vectors, and they are also the longest and shortest vectors the ellipse contains.

In other words, the function |Mx| defined on the unit circle attains its maximum and minimum in the directions v1 and v2 respectively. So finding the singular value decomposition of a matrix reduces to optimizing the function |Mx| over the unit circle. It turns out (the detailed derivation is omitted here) that the vectors at which this function attains its extrema are exactly the eigenvectors of the matrix M^T M. Since M^T M is a symmetric matrix, eigenvectors corresponding to different eigenvalues are mutually orthogonal. Let vi denote the eigenvectors of M^T M; then the singular values are σi = |Mvi|, and ui is the unit vector in the direction of Mvi. But why are the ui orthogonal?

The derivation goes as follows:

Let σi and σj be two distinct singular values:

Mvi = σiui
Mvj = σjuj

Consider the inner product Mvi · Mvj, and assume the corresponding singular values are both nonzero. On the one hand, the value of this expression is 0:

Mvi · Mvj = vi^T M^T M vj = λj vi · vj = 0

On the other hand, we have

Mvi · Mvj = σiσj (ui · uj) = 0

Since σiσj ≠ 0, it follows that ui · uj = 0, so ui and uj are orthogonal. In practice, however, this is not how singular values are actually computed, because it is very inefficient. How to solve for singular values efficiently is not the main topic here; for ease of demonstration, second-order (2 x 2) matrices are used.
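As a sanity check of this derivation (not as an efficient algorithm, for the reason just stated), here is a sketch in Python that builds the SVD from the eigenvectors of M^T M and confirms that the resulting ui are orthonormal; the matrix is again an arbitrary illustration:

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigen-decomposition of the symmetric matrix M^T M.
# eigh returns eigenvalues in ascending order, so reverse them.
lam, V = np.linalg.eigh(M.T @ M)
lam, V = lam[::-1], V[:, ::-1]

sigma = np.sqrt(lam)        # singular values: sigma_i = |M vi|
U = (M @ V) / sigma         # ui = M vi / sigma_i, column by column

print(sigma)
print(np.allclose(U.T @ U, np.eye(2)))            # the ui are orthonormal
print(np.allclose(M, U @ np.diag(sigma) @ V.T))   # M = U Sigma V^T
```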

Application examples

Now let's look at a few examples.

Example One

After the transformation by this matrix, the effect is as shown in the figure.

In this example, the second singular value is 0, so after the transformation there is only one direction left to express, and a single term suffices:

M = u1σ1 v1^T

In other words, if some singular values are very small, the corresponding terms can be dropped from the decomposition of the matrix M without changing it much. From this we can see that the rank of the matrix M equals the number of nonzero singular values.
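This fact is easy to check numerically; here is a small sketch with a hypothetical rank-deficient matrix (its second row is twice the first):

```python
import numpy as np

# A rank-1 matrix: the second row is 2 times the first.
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])

s = np.linalg.svd(M, compute_uv=False)
print(s)                          # [5., 0.]: the second singular value is 0
print(np.sum(s > 1e-10))          # 1 nonzero singular value ...
print(np.linalg.matrix_rank(M))   # ... which equals the rank
```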

Example Two

Let's look at the application of singular value decomposition to data representation. Suppose we have the following 25 x 15 image data.

As the figure shows, the image is mainly composed of the following three parts.

We represent the image as a 25 x 15 matrix M whose elements correspond to the pixels of the image: a white pixel is 1 and a black pixel is 0. This gives a matrix with 375 elements, as shown in the figure.

If we compute the singular value decomposition of the matrix M, the singular values are:

σ1 = 14.72
σ2 = 5.22
σ3 = 3.31

The matrix M can be expressed as

M = u1σ1 v1^T + u2σ2 v2^T + u3σ3 v3^T

Each vi has 15 elements, each ui has 25 elements, and each σi is a single number, so the three terms take 3 x (15 + 25 + 1) = 123 numbers. As the figure shows, we can thus represent the 375-element image with only 123 values.
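The original image is not reproduced here, but the bookkeeping can be sketched in Python with a made-up 25 x 15 binary image of rank 3 standing in for the article's figure:

```python
import numpy as np

# A made-up 25 x 15 black-and-white image (1 = white, 0 = black)
# built from three blocks, so its rank is 3 like the article's figure.
M = np.zeros((25, 15))
M[0:9, 0:5] = M[9:17, 5:10] = M[17:25, 10:15] = 1.0

U, s, Vt = np.linalg.svd(M)
print(s[:4])    # three nonzero singular values, the rest are ~0

# Rank-3 reconstruction: u1*s1*v1^T + u2*s2*v2^T + u3*s3*v3^T.
M3 = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(3))
print(np.allclose(M, M3))   # True: three terms reproduce the image exactly

# Storage: 3 * (15 + 25 + 1) = 123 numbers instead of 25 * 15 = 375.
print(3 * (15 + 25 + 1), "vs", 25 * 15)
```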

Example Three

Noise reduction

In the previous examples the singular values were nonzero and fairly large; now let's explore the case of zero or very small singular values. In general, the larger singular values correspond to the parts of the matrix carrying more information. For example, suppose we have a scanned image contaminated with noise, as shown in the figure.

We process the scanned image in the same way as in Example Two and obtain the singular values of the image matrix:

σ1 = 14.15
σ2 = 4.67
σ3 = 3.00
σ4 = 0.21
σ5 = 0.19
...
σ15 = 0.05

Clearly the first three singular values are much larger than the rest, so the matrix M can be approximated by keeping only three terms:

M ≈ u1σ1 v1^T + u2σ2 v2^T + u3σ3 v3^T

After this truncated singular value decomposition, we obtain a denoised image.
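The scanned image itself is not available, so here is a sketch of the same idea in Python on a synthetic stand-in: a rank-3 image plus small random noise, truncated back to three terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the noisy scan: a rank-3 image plus noise.
clean = np.zeros((25, 15))
clean[0:9, 0:5] = clean[9:17, 5:10] = clean[17:25, 10:15] = 1.0
noisy = clean + 0.05 * rng.standard_normal((25, 15))

U, s, Vt = np.linalg.svd(noisy)
print(s[:5])    # the first three values dominate; the tail is noise

# Keep only the three dominant terms: M ~ u1 s1 v1^T + ... + u3 s3 v3^T.
denoised = U[:, :3] @ np.diag(s[:3]) @ Vt[:3]

# The truncation discards most of the noise.
print(np.linalg.norm(noisy - clean))      # error before
print(np.linalg.norm(denoised - clean))   # smaller error after
```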

Example Four

Data analysis

There is always noise in the data we collect: no matter how sophisticated the equipment or how good the method, there is always some error. Recalling from above that the large singular values correspond to the main information in a matrix, it is quite natural to use SVD to analyze data and extract its main part.

As an example, suppose the data we collect are as shown in the figure.

We represent the data in the form of a matrix:

After the singular value decomposition, we get

σ1 = 6.04
σ2 = 0.22

Since the first singular value is so much larger than the second, the second can be attributed to noise in the data, and the corresponding term can be omitted from the decomposition of the original matrix. After this SVD-based reduction, the main trend of the sample points is retained.

In preserving the main structure of the sample data, this process is closely related to PCA (principal component analysis), and PCA uses SVD to detect dependencies and redundant information between data.
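To illustrate the connection, here is a minimal PCA-via-SVD sketch in Python with made-up two-dimensional samples scattered around a line (the article's actual data points are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up samples near the line y = 0.5 * x, with a little noise.
x = rng.uniform(-3, 3, 50)
data = np.column_stack([x, 0.5 * x + 0.1 * rng.standard_normal(50)])

# PCA via SVD: center the data, then decompose.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
print(s)    # the first singular value is much larger than the second

# The rows of Vt are the principal directions; projecting onto the
# first one keeps the main trend and drops the small noisy component.
direction = Vt[0]
projected = np.outer(centered @ direction, direction) + data.mean(axis=0)
```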

Summary

This article explains the meaning of SVD very clearly, not only from the mathematical point of view but also through several application examples that vividly show how SVD finds the main information in data. In the Netflix Prize, many teams used matrix factorization techniques that come from the decomposition idea of SVD; they are variants of SVD, but the underlying idea is the same. I had used matrix factorization in personalized recommendation systems before, but my understanding was not intuitive; after reading the original article everything became clear. Starting from the idea of finding the main information in data, I want to think about how to exploit the latent relationships in data for personalized recommendation. I also hope experts passing by will share their thoughts.

References:

Gilbert Strang, Linear Algebra and Its Applications. Brooks Cole.

William H. Press et al., Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press.

Dan Kalman, A Singularly Valuable Decomposition: The SVD of a Matrix, The College Mathematics Journal (1996), 2-23.

If You Liked This, You're Sure to Love That, The New York Times, November 21, 2008.


http://blog.sciencenet.cn/blog-696950-699432.html
