The past and present of Sparse Coding (3)


Difference between Sparse Coding and low rank matrix


The previous two summaries described the interpretation of sparse coding in the life sciences and presented some prototype sparse coding models (such as lasso). Today we enter the machine learning side of sparse representation. Since sparse coding entered the machine learning field, many applications have emerged, such as image denoising, deblurring, object detection, target recognition, and recommendation systems in the Internet field. In fact, sparse coding is a little different from low rank: the former has sparse coefficients, while the latter has a sparse (low-dimensional) basis; both are called sparse representations. Next, let's briefly describe the difference between sparse coding and low rank:

Sparse coding centers on two pieces of work: finding the sparse dictionary D and the sparse coefficient vector α. If x is a face image, the purpose of training is to find a suitable dictionary and a linear combination with a few coefficients that reconstructs the face x. Assume the training set has K classes (K individuals) and each class has n samples; then D can be expressed as

D = [D_1, D_2, \ldots, D_K], \quad D_i = [d_{i,1}, d_{i,2}, \ldots, d_{i,n}]

where the subscript i indicates class i.


However, we still need to emphasize that each column of D is an image vector; assume its dimension is M, so that D \in \mathbb{R}^{M \times Kn}. In this way, a sample x can be expressed as

x = D\alpha

where α \in \mathbb{R}^{Kn} is the coefficient vector. The relationship with sparse coding is finally reached: α is sparse, with a lot of zeros (ideally, only the entries corresponding to x's own class are non-zero). If there is still some noise in the image, the model can be extended to

x = D\alpha + e
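To make this setup concrete, here is a minimal numpy sketch; the sizes K, n, M, the class index, and the noise level are all made up for illustration. It stacks class-wise sample vectors into a dictionary D and synthesizes an observation x = Dα + e with a block-sparse α:

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, M = 10, 5, 64                        # K classes, n samples per class, M-dim vectors
# Stack the class sub-dictionaries D_i = [d_i1 ... d_in] side by side: D is M x (K*n)
D = np.hstack([rng.standard_normal((M, n)) for _ in range(K)])
D /= np.linalg.norm(D, axis=0)             # normalize each column (atom)

# A sparse coefficient vector: non-zeros only in the block of one class
alpha = np.zeros(K * n)
i = 3                                      # pretend x belongs to class 3
alpha[i * n:(i + 1) * n] = rng.standard_normal(n)

e = 0.01 * rng.standard_normal(M)          # small noise term
x = D @ alpha + e                          # the observed sample
print(D.shape, alpha.shape, x.shape)       # (64, 50) (50,) (64,)
```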

At this point, the sparse coding model of machine learning is basically set up, and the next thing we need to do is solve it. What we have to say is that both α and e in the formula are unknown, so it looks as if there are many solutions; an additional constraint is therefore indispensable, namely that α and e be as sparse as possible, i.e., that their number of non-zero entries be minimized. The final model (Formula 1) is thus established:

\min_{\alpha, e} \; \|\alpha\|_0 + \|e\|_0 \quad \text{s.t.} \quad x = D\alpha + e

(Formula 1)

The model is solved by optimization; common methods include coordinate descent and orthogonal matching pursuit (OMP). The resulting α and e are sparse. The emphasis here is on the optimization itself, and new optimization algorithms for this problem appear in an endless stream.
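As a concrete illustration, scikit-learn ships an OMP solver. The sketch below uses a random dictionary; the signal dimension M, dictionary size N, and sparsity level k are arbitrary choices for illustration. In the noiseless case OMP typically recovers α exactly:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

M, N, k = 64, 256, 5                      # signal dim, dictionary size, sparsity
D = rng.standard_normal((M, N))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

alpha_true = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
alpha_true[support] = rng.standard_normal(k)
x = D @ alpha_true                        # noiseless observation

# OMP greedily picks the atom most correlated with the residual, k times
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, x)
alpha_hat = omp.coef_

print(np.nonzero(alpha_hat)[0])           # should match sorted(support)
print(np.allclose(alpha_hat, alpha_true, atol=1e-6))
```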

 

Now let's take a look at low rank. First, consider a classic problem, shown in Figure 1:


(Figure 1: a matrix Y of four people's ratings of different movies, from an example by Andrew Ng)

Figure 1, due to Andrew Ng, a professor of artificial intelligence at Stanford, shows how four people rate different movies: Alice and Bob prefer romantic movies, while Carol and Dave prefer action and martial-arts films, so we might infer that Alice and Bob are female while Carol and Dave may be male. This pattern can also be seen roughly in the matrix Y: the data in the first two rows of the matrix are very similar and can be regarded as one axis, and the last two rows as another axis; together these axes span the rating space. The axes are therefore the basis, analogous to the dictionary in sparse coding; however, here the basis is sparse (low-dimensional), while the coefficients are not sparse. In fact, a lot of real data have a similar structure. For example, if you arrange aligned face images of the same person into such a matrix, a similar pattern appears. What we are looking for is the coefficients, as shown in Figure 2:


(Figure 2: the rating matrix Y factored into a basis X and coefficients Θ)

The optimization model is shown in Formula 2; in the standard collaborative-filtering form it reads

\min_{X, \Theta} \; \frac{1}{2} \sum_{(i,j)\,\text{observed}} \big( x_i^T \theta_j - y_{ij} \big)^2 + \frac{\lambda}{2} \sum_i \|x_i\|^2 + \frac{\lambda}{2} \sum_j \|\theta_j\|^2

(Formula 2)
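One way to minimize this objective is the coordinate-descent idea discussed below: fix Θ and solve a small ridge-regression problem for each movie vector x_i, then fix X and solve for each user vector θ_j. The following minimal numpy sketch does this on synthetic data; the sizes, rank r, λ, the number of iterations, and the observed-entry mask are all made-up choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_movies, n_users, r, lam = 30, 20, 3, 0.1
Y = rng.random((n_movies, n_users)) * 5        # fake ratings in [0, 5)
R = rng.random((n_movies, n_users)) < 0.5      # mask of observed entries

X = rng.standard_normal((n_movies, r))         # movie features (the basis)
Theta = rng.standard_normal((n_users, r))      # user preferences (the coefficients)

for _ in range(50):
    # Fix Theta, solve the regularized least squares for each movie row x_i
    for i in range(n_movies):
        T = Theta[R[i]]                        # users who rated movie i
        X[i] = np.linalg.solve(T.T @ T + lam * np.eye(r), T.T @ Y[i, R[i]])
    # Fix X, solve for each user vector theta_j
    for j in range(n_users):
        Xj = X[R[:, j]]                        # movies rated by user j
        Theta[j] = np.linalg.solve(Xj.T @ Xj + lam * np.eye(r), Xj.T @ Y[R[:, j], j])

err = ((X @ Theta.T - Y)[R] ** 2).mean()
print("mean squared error on observed entries:", err)
```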

Yes, just as above, both X and Θ must be found. Coordinate descent can still be used for the solution (as in the sketch above), but there are other methods, mainly based on convex optimization. Terence Tao, a rising star in the field of mathematics, proved that under RIP conditions, minimizing the l0 norm (that is, counting the non-zero elements) has the same solution as minimizing the l1 norm, and the l1 problem is a convex optimization problem. By the same token, the trace norm is used as the convex surrogate in the low-rank problem. There is a great deal of mathematical proof behind this; we will stop here for today, and the optimization solutions will be easier to understand later on.
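For the trace-norm route, the basic building block is soft-thresholding of singular values, which is the proximal operator of the trace norm. Below is a simplified iteration in the style of singular-value thresholding / Soft-Impute on synthetic data; the threshold τ, the sizes, the iteration count, and the mask are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_movies, n_users, tau = 30, 20, 2.0
Y = rng.random((n_movies, n_users)) * 5        # fake ratings
R = rng.random((n_movies, n_users)) < 0.5      # observed mask

Z = np.zeros_like(Y)
for _ in range(100):
    # Fill in unobserved entries with the current low-rank estimate
    W = np.where(R, Y, Z)
    # Soft-threshold the singular values: the proximal operator of the trace norm
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    Z = (U * np.maximum(s - tau, 0.0)) @ Vt

print("rank of the completed matrix:", np.linalg.matrix_rank(Z))
```

Unlike the factorization sketch above, this approach never fixes the rank in advance; the threshold τ implicitly controls how many singular values survive.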


For applications of sparse representation, you can refer to the earlier post titled "Sparse Expression".


When reprinting, please cite the link: http://blog.csdn.net/cuoqu/article/details/9040731

