How to Easily Take Down SVD (Matrix Singular Value Decomposition), Explained in Code


SVD was the thing in machine learning that intimidated me the most. Honestly, a lot of machine learning books treat SVD as something high-end. Then I saw the Netflix Prize recommendation contest: wow, the champion team used it. So I ruthlessly downloaded all their papers and simply could not understand them. After that I had a real fear of SVD; it seemed so high-end. Look at it: it improves prediction accuracy, it seems omnipotent, it can reduce dimensionality, and every competition drags in SVD one way or another. Later, on Kaggle, there was a Walmart store sales forecasting contest where the data was also preprocessed with SVD first.

I went back and downloaded a pile of SVD papers, struggled with them for ages, and still didn't get it. They all explain how they use SVD, but as for what SVD actually is, they stay vague.

It decomposes a large matrix into several smaller matrices, as this blog explains: http://blog.csdn.net/wangran51/article/details/7408414

------------------------- Split line: the part I don't like -------------------------

Singular value decomposition

The discussion above covered decomposing a square matrix, but in LSA we want to decompose the term-document matrix, which is obviously not square. Decomposing the term-document matrix requires singular value decomposition, and the derivation of singular value decomposition uses the square-matrix decomposition described above.

Suppose C is an m x n matrix. Let U be the m x m matrix whose columns are the orthogonal eigenvectors of C·C^T, and V the n x n matrix whose columns are the orthogonal eigenvectors of C^T·C. If r is the rank of C, then C has the singular value decomposition

C = U Σ V^T

The nonzero eigenvalues of C·C^T and C^T·C are the same, namely λ_1, ..., λ_r. Let σ_i = √λ_i. Then Σ is the m x n diagonal matrix whose first r diagonal entries are σ_1, ..., σ_r; every other entry is 0.

The σ_i are called the singular values of the matrix C.

Multiplying C by its transpose C^T gives

C·C^T = U Σ V^T · V Σ^T U^T = U (Σ Σ^T) U^T

which is exactly the decomposition of a symmetric matrix discussed in the previous section.
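You don't have to take the blog's word for the shared eigenvalues; a few lines of numpy (my own quick check, not part of the quoted blog) confirm it:

    import numpy as np

    C = np.random.rand(4, 3)                       # an m x n matrix, m != n

    # C C^T is 4 x 4 and C^T C is 3 x 3, yet their nonzero eigenvalues coincide.
    lam_left = np.linalg.eigvalsh(C @ C.T)[::-1]   # eigenvalues, descending
    lam_right = np.linalg.eigvalsh(C.T @ C)[::-1]

    print(lam_left)    # the first 3 match lam_right; the extra one is ~0
    print(lam_right)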

A graphical representation of singular value decomposition (the figure from the original blog is not reproduced here):

You can see that Σ is an m x n matrix, but rows n+1 through m are all zero, so it can be stored as an n x n matrix; and since the right-hand side must still multiply out to an m x n matrix, U can correspondingly be stored as an m x n matrix, with V^T an n x n matrix.
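Here's a quick numpy check of those shapes (my addition, not part of the quoted blog); full_matrices toggles between the full and the reduced form:

    import numpy as np

    C = np.random.rand(6, 3)                    # m = 6 terms, n = 3 documents

    # Full SVD: U is m x m; rows n+1 .. m of Sigma are all zero.
    U, s, Vt = np.linalg.svd(C, full_matrices=True)
    print(U.shape, s.shape, Vt.shape)           # (6, 6) (3,) (3, 3)

    # Reduced SVD: drop the zero rows, so U shrinks to m x n.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    print(U.shape, s.shape, Vt.shape)           # (6, 3) (3,) (3, 3)
    print(np.allclose(C, U @ np.diag(s) @ Vt))  # True: still reconstructs C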

------------------------- Split line: I looked up a lot of material, and that's what all of it says -------------------------

Reading this stuff, it all still felt very high-end. You take one matrix and decompose it into that many smaller matrices, and the product of those small matrices is guaranteed to equal the original. Amazing. But what do I actually write in code? How is this implemented?

Then I found a lot of blogs, all rambling on about how SVD can go into a recommender system: the U matrix is the users' features, the V matrix is the movies' features, very high-end, and it can be used in natural language processing too.

And then? Nothing. Still confused.

Let me tell you: SVD is something any college undergraduate can do. Do you know what it actually does?

I'll give you an easy-to-follow explanation, followed by a Python version of the code:

When a matrix is square, it can be eigendecomposed. That's the classic exam material from linear algebra: take a square matrix and find its eigenvalues and eigenvectors. When it has n linearly independent eigenvectors, the square matrix is similar to a diagonal matrix, i.e. it can be diagonalized. If it has fewer than n linearly independent eigenvectors, it can only be reduced to Jordan normal form (Jordan blocks).
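In numpy that looks like this (a minimal sketch, with a hand-picked 2 x 2 matrix that does have two independent eigenvectors):

    import numpy as np

    # Eigendecomposition of a square matrix: A = P D P^(-1)
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    eigvals, P = np.linalg.eig(A)    # columns of P are the eigenvectors
    D = np.diag(eigvals)             # diagonal matrix of the eigenvalues

    # A is similar to the diagonal matrix D:
    print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True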

So, here's the question: what exactly is SVD?

When a matrix A is not square, you cannot diagonalize it directly. But:

A·A^T and A^T·A have the same nonzero eigenvalues. That is the entire trick. First I compute A times A's transpose. Call that matrix W; it is definitely square (and symmetric). Fine, so I find the eigenvectors of this square matrix, eigendecompose it, and turn it into a diagonal matrix.

The singular values of A are then just the square roots of W's eigenvalues. Of course: a determinant is the product of the eigenvalues, and for a square matrix |A·A^T| = |A|². And as said above, A·A^T and A^T·A share the same nonzero eigenvalues. With that, we have found the diagonal matrix Σ in the SVD.
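You can verify the square-root claim directly; here is a minimal sketch comparing against numpy's built-in svd:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])             # 2 x 3: not square, no eigenvalues

    W = A @ A.T                                 # square and symmetric
    lam = np.linalg.eigvalsh(W)[::-1]           # W's eigenvalues, descending

    print(np.sqrt(lam))                         # square roots of W's eigenvalues
    print(np.linalg.svd(A, compute_uv=False))   # numpy's singular values: identical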

Next, the user-feature vectors and the movie-feature vectors, that is, U and V. In fact this just means taking the eigenvectors of W and doing a bit of matrix multiplication on them. I'll leave a little suspense; see the code:
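The original code block did not survive in this copy of the post, so what follows is a minimal sketch of the construction described above, using my own made-up 4 x 3 user-by-movie ratings matrix: U comes from the eigenvectors of W = A·A^T, Σ from the square roots of W's eigenvalues, and each column of V from multiplying A^T against the corresponding column of U.

    import numpy as np

    # A made-up user x movie ratings matrix (rows: users, columns: movies).
    A = np.array([[5.0, 5.0, 0.0],
                  [4.0, 5.0, 0.0],
                  [0.0, 0.0, 5.0],
                  [0.0, 1.0, 4.0]])
    m, n = A.shape

    # Step 1: W = A A^T is square and symmetric, so eigendecompose it.
    W = A @ A.T
    lam, U = np.linalg.eigh(W)                 # eigh handles symmetric matrices
    order = np.argsort(lam)[::-1]              # sort eigenpairs descending
    lam, U = lam[order], U[:, order]

    # Step 2: the singular values are the square roots of W's eigenvalues.
    sigma = np.sqrt(np.clip(lam, 0.0, None))   # clip tiny negative round-off

    # Step 3: each right singular vector is v_i = A^T u_i / sigma_i.
    V = np.zeros((n, n))
    for i in range(min(m, n)):
        if sigma[i] > 1e-10:
            V[:, i] = (A.T @ U[:, i]) / sigma[i]

    # Put sigma on the diagonal of an m x n matrix and check A = U S V^T.
    S = np.zeros((m, n))
    np.fill_diagonal(S, sigma[:min(m, n)])
    print(np.allclose(A, U @ S @ V.T))         # True

    # Rows of U: user feature vectors. Rows of V: movie feature vectors.
    print(np.round(U, 2))
    print(np.round(V, 2))

The rows of U play the role of user feature vectors and the rows of V the movie feature vectors, which is exactly the recommender-system story the blogs like to tell.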
