The MP and OMP algorithms and the ideas behind them

This post mainly introduces the MP (Matching Pursuit) algorithm and the OMP (Orthogonal Matching Pursuit) algorithm [1]. Although both algorithms were proposed in the early 1990s and are classics, the Chinese-language literature (at least what I managed to find) only describes the algorithm steps and simple applications without analyzing them in detail, while the foreign literature analyzes them very thoroughly. So I combine my own understanding and analyze them in this post, which also serves as my notes.

1. Sparse representation of signals

Given an over-complete dictionary matrix D, each of whose columns represents an atom of a prototype signal, a signal y can be represented as a sparse linear combination of these atoms: y = Dx, or approximately y ≈ Dx. The over-completeness of the dictionary matrix means that the number of atoms K is far greater than the length of the signal y (which is obviously n), i.e. n << K.
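As a quick illustration of y = Dx, here is a minimal numpy sketch; the dimensions 16 and 64 and the chosen indices are arbitrary, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 16, 64                       # signal length n far smaller than number of atoms K
D = rng.standard_normal((n, K))     # over-complete dictionary, one atom per column
D /= np.linalg.norm(D, axis=0)      # normalize the atoms

x = np.zeros(K)                     # sparse coefficient vector
x[[3, 40, 57]] = [2.0, -1.5, 0.7]   # only 3 of the 64 entries are non-zero
y = D @ x                           # the signal is a sparse linear combination: y = Dx
```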

2. MP algorithm (matching pursuit)

2.1 Algorithm description

MP is one of the methods for sparse decomposition of a signal: the signal is decomposed over the over-complete dictionary.

Assume the signal to be represented is y, with length n. Let H denote a Hilbert space; in H, a set of vectors {x_1, x_2, ..., x_K} forms the dictionary matrix D, where each vector is called an atom whose length equals the length n of the signal y, and these vectors have been normalized, i.e. ||x_i|| = 1, so every atom is a unit vector.

The basic idea of the MP algorithm: from the dictionary matrix D (also called the over-complete atom library), select the atom (i.e. the column) that best matches the signal y and construct a sparse approximation; compute the signal residual; then select the atom that best matches that residual; and keep iterating. The signal y can then be expressed as a linear combination of the selected atoms plus the final residual, and if the residual is within a negligible range, y is simply a linear combination of those atoms. How do we choose the atom that best matches y? How do we construct the sparse approximation and compute the residual? How do we iterate? The steps of decomposing a signal with MP are as follows.

Step 1: Compute the inner product of the signal y with each column (atom) of the dictionary matrix and select the atom with the largest absolute inner product; it is the atom that best matches y in this iteration. In formal terms: let the signal y ∈ H, and select from the dictionary matrix the best-matching atom x_{r_0}, satisfying

|<y, x_{r_0}>| = sup_i |<y, x_i>|,

where r_0 denotes a column index of the dictionary matrix. In this way, the signal y is decomposed into two parts, its vertical projection onto the best-matching atom and a residual, i.e.:

y = <y, x_{r_0}> x_{r_0} + R_1 f.

Step 2: Decompose the residual R_1 f in the same way as Step 1. At the k-th step (with R_0 f = y) we obtain

R_k f = <R_k f, x_{r_k}> x_{r_k} + R_{k+1} f,

where x_{r_k} satisfies |<R_k f, x_{r_k}>| = sup_i |<R_k f, x_i>|. Thus, after K steps of decomposition, the signal y is decomposed into

y = Σ_{k=0}^{K-1} <R_k f, x_{r_k}> x_{r_k} + R_K f.
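The steps above translate almost directly into code. Below is a minimal Python/numpy sketch of MP (the function name matching_pursuit and the toy dictionary are my own illustration, not code from reference [1]); printing the residual norm shows it shrinking with each iteration:

```python
import numpy as np

def matching_pursuit(D, y, num_iters):
    """Minimal MP sketch: D is an n-by-K dictionary with unit-norm columns,
    y is the length-n signal. Returns the coefficient vector x and the
    final residual."""
    K = D.shape[1]
    x = np.zeros(K)
    residual = y.copy()
    for _ in range(num_iters):
        # Step 1: atom with the largest |inner product| with the residual
        inner = D.T @ residual
        idx = int(np.argmax(np.abs(inner)))
        coeff = inner[idx]
        # Step 2: peel off the projection onto that atom and keep the rest
        x[idx] += coeff                       # the same atom may be picked again
        residual = residual - coeff * D[:, idx]
    return x, residual

# tiny usage example with a random over-complete dictionary (n << K)
rng = np.random.default_rng(0)
n, K = 16, 64
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)                # normalize atoms: ||x_i|| = 1
y = 2.0 * D[:, 3] - 1.5 * D[:, 40]            # a signal that is sparse in D
x_hat, r = matching_pursuit(D, y, num_iters=10)
print(np.linalg.norm(r))                      # residual norm shrinks as iterations proceed
```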

2.2 Further discussion

(1) Why is the decomposition assumed to take place in a Hilbert space? A Hilbert space is, by definition, a complete inner product space. Obviously, the MP computation uses inner products of vectors, so carrying out the signal decomposition in a Hilbert space is natural. What is a complete inner product space? Please look that up yourself; space here is limited.

(2) Why must the atoms be normalized in advance, i.e. ||x_i|| = 1 as described above? The inner product is used to compute the projection length of a vector onto some direction, and that requires the direction vector to be a unit vector. The most matching atom in MP is the one with the largest inner product, i.e. the one onto which the vertical projection of the signal (or residual) is longest; for example, the projection length in the first decomposition is <y, x_{r_0}>. The three vectors y, <y, x_{r_0}> x_{r_0} and R_1 f form a triangle, and <y, x_{r_0}> x_{r_0} and R_1 f are orthogonal (one cannot strictly say "perpendicular", but in two-dimensional space you can picture these two vectors as perpendicular).

(3) The MP algorithm is convergent. Because R_k f = <R_k f, x_{r_k}> x_{r_k} + R_{k+1} f and the two terms on the right are orthogonal, we obtain ||R_{k+1} f||^2 = ||R_k f||^2 - |<R_k f, x_{r_k}>|^2, so each residual is smaller than the previous one; therefore the algorithm converges.
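Points (2) and (3) can be checked numerically: for a unit-norm atom the inner product is the projection length, the projection is orthogonal to the residual, and the squared norms satisfy the identity used in the convergence argument. A small sketch with made-up vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(5)          # a toy signal
a = rng.standard_normal(5)
a /= np.linalg.norm(a)              # unit-norm atom, so <y, a> is the projection length

proj = (y @ a) * a                  # vertical projection of y onto the atom
r1 = y - proj                       # residual R_1 f
print(np.isclose(proj @ r1, 0.0))                            # True: projection ⟂ residual
print(np.isclose(np.linalg.norm(y) ** 2,
                 (y @ a) ** 2 + np.linalg.norm(r1) ** 2))    # True: the convergence identity
```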

2.3 Disadvantages of the MP algorithm

As mentioned above, the vertical projection of the signal (residual) onto the selected atom is not orthogonal to the previously selected atoms, which makes the result of each iteration suboptimal rather than optimal, so convergence requires many iterations. For example: in two-dimensional space, a signal y is represented by D = [x1, x2]; if x1 and x2 are not orthogonal, the MP iteration will keep bouncing back and forth between x1 and x2, precisely because the vertical projection of the signal (residual) onto the selected atom is not orthogonal to the other atoms. Describing this rigorously as in [1] may make it easier to understand. In the Hilbert space H, let D = {x_1, ..., x_K} and f ∈ H, define V = span(x_1, ..., x_K), and let P_V f denote the orthogonal projection of f onto V. MP constructs an expression of the form

f = Σ_{k=0}^{K-1} <R_k f, x_{r_k}> x_{r_k} + R_K f = f_K + R_K f,

so the result of K iterations of the MP algorithm can be written in this form (the signal was called y in the earlier description; here it becomes f, please note).

f_K is an optimal K-term approximation if and only if R_K f is orthogonal to f_K (i.e. to the span of the selected atoms). Because MP can only guarantee that R_K f is orthogonal to the most recently selected atom x_{r_{K-1}}, in general the result is suboptimal. What does that mean? f_K is a linear combination of K terms, and this combination is taken as the approximation; it is optimal only when the K-th residual is orthogonal to f_K. If the K-th residual is orthogonal to f_K, the residual has no component along any of the atoms appearing in f_K, so the subsequent decomposition of that residual cannot reintroduce terms already present in f_K; that is what makes f_K optimal. Under normal circumstances this condition is not satisfied; MP generally only guarantees that the K-th residual is orthogonal to x_{r_{K-1}}, which is why it was said earlier that "the vertical projection of the signal (residual) onto the selected atom is non-orthogonal" to the others. If the K-th residual is not orthogonal to f_K, later iterations will again produce terms that already appear in f_K, so f_K is clearly not optimal, and this is why MP needs many more iterations to converge. This does not mean MP can never reach the optimal solution; rather, the property just described means it generally obtains a suboptimal solution instead of the optimal one. Is there a way to make the K-th residual orthogonal to all the selected atoms? There is, and it is the OMP algorithm discussed below.
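The oscillation described above is easy to reproduce. In the following sketch (toy atoms and a toy signal of my own choosing), MP on two non-orthogonal atoms in 2-D keeps alternating between them, and the residual decays geometrically but never reaches zero in a finite number of steps:

```python
import numpy as np

# Two non-orthogonal unit atoms in 2-D; MP keeps bouncing between them.
x1 = np.array([1.0, 0.0])
x2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
D = np.column_stack([x1, x2])

y = np.array([0.5, 1.0])
residual = y.copy()
for _ in range(6):
    inner = D.T @ residual
    idx = int(np.argmax(np.abs(inner)))       # best-matching atom for this residual
    residual = residual - inner[idx] * D[:, idx]
    print(idx, np.linalg.norm(residual))
# the printed indices alternate between the two atoms while the residual shrinks
```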

3. OMP algorithm

3.1 Algorithm description

The improvement of the OMP algorithm is that at each step of the decomposition it orthogonalizes the residual against all of the atoms selected so far, which makes OMP converge faster than MP for the same accuracy requirement.

So how is this orthogonalization against all of the selected atoms done at each step? Before formally describing the OMP algorithm, let us first look at the basic idea.

First look at the k-th order model, which expresses the state of the signal f after k steps of decomposition. It looks very familiar, but note how it differs from the MP algorithm: the residual is orthogonal to every preceding component, which is where the extra "orthogonal" in the name comes from; in MP the residual is only orthogonal to the most recently selected atom.

f = Σ_{n=1}^{k} a_n^k x_n + R_k f,  with <R_k f, x_n> = 0 for n = 1, ..., k    (1)

The (k+1)-th order model is as follows:

f = Σ_{n=1}^{k+1} a_n^{k+1} x_n + R_{k+1} f,  with <R_{k+1} f, x_n> = 0 for n = 1, ..., k+1    (2)

Subtracting the k-th order model from the (k+1)-th order model and rearranging gives:

R_k f = Σ_{n=1}^{k} (a_n^{k+1} - a_n^k) x_n + a_{k+1}^{k+1} x_{k+1} + R_{k+1} f    (3)

We know that the atoms of the dictionary matrix D are not orthogonal, so we introduce an auxiliary model that expresses the dependence of x_{k+1} on the first k atoms, as follows:

x_{k+1} = Σ_{n=1}^{k} b_n^k x_n + γ_k,  with <γ_k, x_n> = 0 for n = 1, ..., k    (4)

Similar to the earlier description, Σ_{n=1}^{k} b_n^k x_n is the orthogonal projection of x_{k+1} onto span(x_1, ..., x_k), and the remaining term γ_k is the residual of that projection. In mathematical notation:

Σ_{n=1}^{k} b_n^k x_n = P_V x_{k+1},  γ_k = x_{k+1} - P_V x_{k+1}.

Please note that the superscript k on a and b indicates their values at step k.

Substituting (4) into (3) gives:

R_k f = Σ_{n=1}^{k} (a_n^{k+1} - a_n^k + a_{k+1}^{k+1} b_n^k) x_n + a_{k+1}^{k+1} γ_k + R_{k+1} f    (5)

If the following two equations hold, then (5) holds:

a_n^{k+1} = a_n^k - a_{k+1}^{k+1} b_n^k,  n = 1, ..., k    (6)

R_k f = a_{k+1}^{k+1} γ_k + R_{k+1} f    (7)

Let α_k = a_{k+1}^{k+1}. Then we have:

R_{k+1} f = R_k f - α_k γ_k.

The value of α_k is easy to obtain: take the inner product of both sides of (7) with γ_k, which gives

<R_k f, γ_k> = α_k ||γ_k||^2 + <R_{k+1} f, γ_k>.

The second term on the right is 0 because R_{k+1} f and γ_k are orthogonal, so we obtain the first form of α_k:

α_k = <R_k f, γ_k> / ||γ_k||^2.

As for <R_k f, γ_k>, taking the inner product of both sides of (4) with R_k f (and using <R_k f, x_n> = 0 for n = 1, ..., k) gives <R_k f, γ_k> = <R_k f, x_{k+1}>, so the second form of α_k is

α_k = <R_k f, x_{k+1}> / ||γ_k||^2.

As for how to compute the b_n^k in (4), please refer to the calculation details section of reference [1]. The reason it is not covered here is that a simpler way to compute the solution is given later.
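Equations (4)-(7) already give a workable update rule: project the newly selected atom onto the span of the previously selected ones to get γ_k, compute α_k, and subtract α_k γ_k from the residual. The following Python/numpy sketch implements exactly this recursion (function and variable names are mine; it is an illustration of the derivation, not the reference implementation from [1]):

```python
import numpy as np

def omp_recursive(D, y, K):
    """OMP via the recursive update of equations (4)-(7).
    Assumes D has unit-norm columns; returns the selected indices and the residual."""
    n = D.shape[0]
    residual = y.copy()
    selected = []          # indices of chosen atoms
    Q = np.zeros((n, 0))   # orthonormal basis for the span of chosen atoms
    for _ in range(K):
        # pick the atom with the largest |inner product| with the residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        selected.append(idx)
        x_new = D[:, idx]
        # gamma_k: component of the new atom orthogonal to the chosen span, eq. (4)
        gamma = x_new - Q @ (Q.T @ x_new)
        # alpha_k = <R_k f, x_{k+1}> / ||gamma_k||^2
        alpha = (residual @ x_new) / (gamma @ gamma)
        # R_{k+1} f = R_k f - alpha_k * gamma_k, eq. (7)
        residual = residual - alpha * gamma
        # extend the orthonormal basis with the normalized gamma_k
        Q = np.column_stack([Q, gamma / np.linalg.norm(gamma)])
    return selected, residual
```

After each update the residual is orthogonal to every selected atom, which is exactly the property the k-th order model (1) requires.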

3.2 Proof of convergence

From (7), since α_k γ_k and R_{k+1} f are orthogonal, taking the squared norm of both sides and substituting the value of α_k gives

||R_{k+1} f||^2 = ||R_k f||^2 - |<R_k f, x_{k+1}>|^2 / ||γ_k||^2.

It can be seen that each residual is smaller than the previous one, so the algorithm converges.

3.3 Algorithm Steps

The flow of the entire OMP algorithm is as follows: initialize the residual as the signal itself; at each step, select the atom with the largest inner product with the current residual and add it to the selected set; recompute the coefficients of all selected atoms so that the new residual is orthogonal to every one of them; update the residual; and repeat until the residual is small enough or the desired number of atoms has been selected.

With the derivation above, the algorithm is quite easy to understand.
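For concreteness, here is a minimal Python/numpy sketch of that flow using a least-squares re-fit of all selected atoms at each step (the names and the stopping rule are my own choices, not taken from [2] or [3]):

```python
import numpy as np

def omp(D, y, K, tol=1e-10):
    """Minimal OMP sketch: at every iteration pick the atom most correlated with
    the residual, then re-fit ALL selected atoms by least squares, so the residual
    stays orthogonal to every chosen atom. Assumes unit-norm columns in D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(K):
        idx = int(np.argmax(np.abs(D.T @ residual)))                  # sweep / matching step
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)    # update provisional solution
        residual = y - D[:, support] @ coeffs                         # update residual
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x, residual
```

On a signal that is an exact sparse combination of a few atoms, the residual typically drops to numerical zero once those atoms have been selected, which is the faster convergence mentioned above.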

This is not the whole story yet: the OMP iteration can also be computed with a second method. A fellow student's thesis [2] describes it very well.

Comparing the Chinese and English descriptions, the essence is the same; there are only slight differences. Here, by the way, is an OMP implementation that someone on the web wrote (source unknown), shared for everyone.

The description of OMP in another paper by a well-known foreign researcher [3] is also worth introducing, because its presentation is very rigorous, though a little hard to digest; with the foundation above, however, it becomes much easier.


The Sweep step in its description searches for the index of the column of the dictionary matrix D that has the largest inner product with the current residual, and this step explains why the largest inner product is the right choice and how to select it. See, it is stated very clearly.


The Update Provisional Solution step in its algorithm is also very easy: given A and b in b = Ax, solve for x; the least-squares solution for x is the pseudo-inverse of A multiplied by b, namely:

x = (A^T A)^{-1} A^T b.

It looks like a headache, but it is very easy to do in MATLAB, as the MATLAB code mentioned above makes clear.
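A tiny numpy check of this step (in MATLAB the same thing is a single backslash or pinv call); the matrix A and vector b here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))        # tall matrix: more equations than unknowns
b = rng.standard_normal(10)

x_pinv = np.linalg.pinv(A) @ b                      # pseudo-inverse of A times b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)     # equivalent least-squares solve
print(np.allclose(x_pinv, x_lstsq))                 # True: both give the same solution
```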

As you can see, the algorithm flow is clear and easy to understand. This is the charm of the OMP algorithm: the tool is simple to use, yet very interesting ideas hide behind it.

The reason for writing this post is that after searching around, I found no really detailed introduction to MP and OMP. Reference [1] explains them very clearly; anyone interested should look it up. Just don't let my advisor find out that I'm searching Chinese literature and writing Chinese blog posts.


References:

[1] Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition
[2] http://wenku.baidu.com/view/22f3171614791711cc7917e4.html
[3] From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images

