The MP and OMP Algorithms and the Ideas Behind Them


This post introduces the MP (Matching Pursuit) and OMP (Orthogonal Matching Pursuit) algorithms [1]. Although both were proposed in the early 1990s, they remain classics. The Chinese-language literature (at least what I managed to find) only lists the algorithm steps and simple applications without analyzing them in detail, whereas the foreign literature analyzes them very thoroughly. So I combine that with my own understanding in this post, which doubles as my notes.

1. Sparse Representation of Signals

Given an over-complete dictionary matrix $D \in \mathbb{R}^{n \times k}$, each column of which represents an atom (a prototype signal), a signal $y \in \mathbb{R}^n$ can be represented as a sparse linear combination of these atoms: $y = Dx$, where $x \in \mathbb{R}^k$ is sparse. The dictionary matrix is called over-complete because the number of atoms $k$ is far greater than the length $n$ of the signal $y$, i.e. $n \ll k$.
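To make this concrete, here is a small MATLAB sketch (my own illustration; the dimensions and variable names are arbitrary assumptions) that builds a random over-complete dictionary and a signal that is an exact 3-sparse combination of its atoms:

```matlab
% Build a random over-complete dictionary D (n << k) with unit-norm
% columns, then synthesize a signal y from a 3-sparse coefficient
% vector x. All dimensions here are arbitrary, for illustration only.
n = 32; k = 256;
D = randn(n, k);
D = D ./ sqrt(sum(D.^2, 1));       % normalize every atom to unit length
x = zeros(k, 1);
x([10 70 200]) = [1.5 -0.8 2.0];   % three nonzero coefficients
y = D * x;                         % the signal: a sparse combination of atoms
```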

2. The MP Algorithm (Matching Pursuit)

2.1 Algorithm Description

As one of the methods for sparse decomposition of a signal, MP decomposes the signal over an over-complete dictionary.

Assume the signal to be represented is $y$, with length $n$. Let $H$ denote a Hilbert space; in this space, a set of vectors $\{x_1, x_2, \dots, x_k, \dots\}$ forms the dictionary matrix $D$, where each vector is called an atom, whose length equals the length $n$ of the signal $y$, and all atoms have been normalized, i.e. $\|x_j\| = 1$; that is, each is a unit vector of length 1.

The basic idea of the MP algorithm: from the dictionary matrix $D$ (also called the over-complete atom library), select the atom (i.e. the column) that best matches the signal $y$, construct a sparse approximation from it, and compute the residual; then select the atom that best matches that residual, and iterate. The signal $y$ can then be expressed as the linear combination of these atoms plus the final residual. Clearly, if the residual falls within a negligible range, the signal $y$ is simply the linear combination of the selected atoms. How do we select the atom that best matches the signal $y$? How do we construct the sparse approximation and compute the residual? How do we iterate? Let us describe the steps of MP signal decomposition in detail:

[1] Compute the inner product of the signal $y$ with each column (atom) of the dictionary matrix, and select the atom with the largest absolute inner product; it is the atom that best matches $y$ in this iteration. Described in formal terms: for the signal $y \in H$, select an atom $x_{r_0}$ from the dictionary matrix satisfying

$$|\langle y, x_{r_0} \rangle| = \sup_{j} |\langle y, x_j \rangle|,$$

where $r_0$ denotes a column index of the dictionary matrix. In this way, the signal $y$ is decomposed into two parts, its vertical projection onto the best-matching atom and a residual, i.e.:

$$y = \langle y, x_{r_0} \rangle x_{r_0} + R^1 y.$$

[2] Apply the decomposition of step [1] to the residual $R^1 y$ in the same way. After the $k$-th step we obtain

$$R^k y = \langle R^k y, x_{r_k} \rangle x_{r_k} + R^{k+1} y,$$

which satisfies $|\langle R^k y, x_{r_k} \rangle| = \sup_j |\langle R^k y, x_j \rangle|$. Visibly, after $K$ steps of decomposition, the signal $y$ is decomposed into

$$y = \sum_{k=0}^{K-1} \langle R^k y, x_{r_k} \rangle x_{r_k} + R^K y, \qquad \text{where } R^0 y = y.$$
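The steps above translate almost line for line into code. Here is a minimal MATLAB sketch of MP (my own, not from the original post; the interface mp(D, y, K) is an assumption):

```matlab
% Minimal MP sketch (save as mp.m): D is n-by-k with unit-norm columns,
% y is the signal, K is the number of iterations. Returns the
% coefficient vector x and the final residual r = R^K y.
function [x, r] = mp(D, y, K)
    x = zeros(size(D, 2), 1);
    r = y;                            % R^0 y = y
    for t = 1:K
        c = D' * r;                   % inner products with all atoms
        [~, idx] = max(abs(c));       % best-matching atom this step
        x(idx) = x(idx) + c(idx);     % accumulate its coefficient
        r = r - c(idx) * D(:, idx);   % subtract the vertical projection
    end
end
```

With D and y from the earlier snippet, [x_hat, r] = mp(D, y, 50) should drive norm(r) toward zero.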

2.2 Further discussion

(1) Why assume a Hilbert space? A Hilbert space is, by definition, a complete inner product space. The MP computation clearly relies on inner products of vectors, so decomposing the signal in a Hilbert space is only natural. As for what exactly a complete inner product space is, please look it up yourself; space here is limited.

(2) Why must the atoms be normalized in advance, i.e. $\|x_j\| = 1$ as described above? The inner product is commonly used to compute the projection length of a vector onto a direction, and for that the direction vector must be a unit vector. The best-matching atom in MP is the one with the largest inner product, i.e. the one onto which the signal (or residual) has the longest vertical projection; for example, in the first decomposition the projection length is $\langle y, x_{r_0} \rangle$. The three vectors $y$, $\langle y, x_{r_0} \rangle x_{r_0}$ and $R^1 y$ form a triangle, and $\langle y, x_{r_0} \rangle x_{r_0}$ and $R^1 y$ are orthogonal (one should say orthogonal rather than perpendicular, although in two-dimensional space you can picture these vectors as perpendicular).
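A quick numerical check of this projection geometry (a sketch of my own; the vectors are made up):

```matlab
% Projection of y onto a unit atom d, and the orthogonal residual.
d = [3; 4] / 5;              % a unit vector
y = [1; 0.5];
p = (y' * d) * d;            % vertical projection of y onto d
r1 = y - p;                  % residual after the first decomposition
disp(d' * r1);               % ~0: projection and residual are orthogonal
```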

(3) The MP algorithm is convergent: since $\langle R^k y, x_{r_k} \rangle x_{r_k}$ and $R^{k+1} y$ are orthogonal at every step, we obtain

$$\|R^{k+1} y\|^2 = \|R^k y\|^2 - |\langle R^k y, x_{r_k} \rangle|^2,$$

so each residual is smaller than the previous one, hence convergence.

2.3 Disadvantages of the MP Algorithm

As mentioned above, the signal (residual) is only guaranteed to be orthogonal to the atom selected in the current step, not to all previously selected atoms. This makes the result of each iteration merely suboptimal rather than optimal, so convergence requires many iterations. For example: in two-dimensional space, a signal $y$ can be expressed exactly by $D = [x_1, x_2]$, yet if $x_1$ and $x_2$ are not orthogonal, the MP iteration will keep bouncing back and forth between $x_1$ and $x_2$; the cause is precisely this non-orthogonality of the residual to the previously selected atoms.

The rigorous description in [1] may be easier to understand. In the Hilbert space $H$, given the dictionary $D = \{x_1, x_2, \dots\}$, define $V = \mathrm{span}(x_{r_0}, x_{r_1}, \dots)$, the subspace spanned by the selected atoms, and let $P_V$ denote the orthogonal projection onto $V$. Then the result of $K$ iterations of the MP algorithm can be expressed as follows (as described above, the signal was $y$; here it becomes $f$, please note):

$$f = \sum_{k=0}^{K-1} \langle R^k f, x_{r_k} \rangle x_{r_k} + R^K f = f_K + R^K f.$$

This is an optimal $K$-term approximation if and only if $R^K f \perp f_K$. Because MP can only guarantee $R^K f \perp x_{r_{K-1}}$, the general situation is suboptimal. What does that mean? $f_K$ is a linear combination of $K$ terms that approximates $f$, and it is optimal only when the $K$-th residual is orthogonal to $f_K$. If $R^K f$ is orthogonal to $f_K$, meaning the residual is linearly independent of every atom appearing in $f_K$, then the subsequent decomposition of the residual can never again produce terms already in $f_K$; that is the optimal case. In general this condition cannot be satisfied; MP can only guarantee that the $K$-th residual is orthogonal to $x_{r_{K-1}}$, which is exactly the non-orthogonality mentioned earlier. If $R^K f$ is not orthogonal to $f_K$, later iterations will again produce terms already in $f_K$, so $f_K$ is clearly not optimal, and this is why MP needs more iterations to converge. This is not to say that MP can never reach the optimal solution; rather, the property described above means it generally reaches a suboptimal solution instead. So, is there a way to make the $K$-th residual orthogonal to all selected atoms? There is, and that is the OMP algorithm discussed next.
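The ping-pong effect is easy to reproduce numerically. In this hypothetical 2-D example (entirely my own construction), MP keeps alternating between two non-orthogonal atoms, while a least-squares fit over both atoms, which is what OMP will do, finishes exactly:

```matlab
% Two non-orthogonal unit atoms in 2-D; plain MP keeps correcting along
% x1 and x2 alternately, so the residual only decays geometrically.
D = [1, cos(pi/4);
     0, sin(pi/4)];          % columns x1, x2 are unit-norm atoms
y = [1; 0.5];                % the signal to decompose
r = y;
for t = 1:6
    c = D' * r;              % inner products with both atoms
    [~, idx] = max(abs(c));  % most matched atom this round
    r = r - c(idx) * D(:, idx);
    fprintf('iter %d: picked x%d, ||r|| = %.4f\n', t, idx, norm(r));
end
% A least-squares fit over both atoms recovers y exactly, since
% span(x1, x2) is the whole plane; the residual is zero.
x_ls = D \ y;
```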

3. The OMP Algorithm

3.1 Algorithm Description

The improvement of the OMP algorithm lies in orthogonalizing against all the previously selected atoms at each step of the decomposition, which makes OMP converge faster for the same precision requirement.

So how is this orthogonalization against all the selected atoms carried out at each step? Before formally describing the OMP algorithm, let us first look at the basic idea.

First consider a $k$-th order model, which describes the signal $f$ after $k$ steps of decomposition. It looks very familiar, but note the difference from the MP algorithm: here the residual is orthogonal to every preceding component, not just the most recently selected one, which is where the extra "orthogonal" in the name comes from:

$$f = \sum_{n=1}^{k} a_n^k x_n + R^k f, \qquad \langle R^k f, x_n \rangle = 0, \quad n = 1, \dots, k. \tag{1}$$

The $(k+1)$-th order model is as follows:

$$f = \sum_{n=1}^{k+1} a_n^{k+1} x_n + R^{k+1} f, \qquad \langle R^{k+1} f, x_n \rangle = 0, \quad n = 1, \dots, k+1. \tag{2}$$

Subtracting the $k$-th order model from the $(k+1)$-th order model, we get the following:

$$R^k f = \sum_{n=1}^{k} \left(a_n^{k+1} - a_n^k\right) x_n + a_{k+1}^{k+1} x_{k+1} + R^{k+1} f. \tag{3}$$

We know that the atoms of the dictionary matrix $D$ are not orthogonal to one another, so we introduce an auxiliary model that expresses the dependence of $x_{k+1}$ on the first $k$ atoms, described as follows:

$$x_{k+1} = \sum_{n=1}^{k} b_n^k x_n + \gamma_k, \qquad \langle \gamma_k, x_n \rangle = 0, \quad n = 1, \dots, k. \tag{4}$$

Similar to the earlier discussion, $\sum_{n=1}^{k} b_n^k x_n$ is the orthogonal projection of $x_{k+1}$ onto $\mathrm{span}(x_1, \dots, x_k)$, and $\gamma_k$ is the residual of that projection. In mathematical notation:

$$\sum_{n=1}^{k} b_n^k x_n = P_V x_{k+1}, \qquad \gamma_k = x_{k+1} - P_V x_{k+1}, \qquad V = \mathrm{span}(x_1, \dots, x_k).$$

Please note that the superscripts of $a$ and $b$ here denote values at step $k$.

Substituting (4) into (3), we have:

$$R^k f = \sum_{n=1}^{k} \left(a_n^{k+1} - a_n^k + a_{k+1}^{k+1} b_n^k\right) x_n + a_{k+1}^{k+1} \gamma_k + R^{k+1} f. \tag{5}$$

If the following two formulas hold, then (5) must hold:

$$a_n^{k+1} = a_n^k - a_{k+1}^{k+1} b_n^k, \quad n = 1, \dots, k, \tag{6}$$

$$R^k f = a_{k+1}^{k+1} \gamma_k + R^{k+1} f. \tag{7}$$

Let $\alpha_k \triangleq a_{k+1}^{k+1}$; then we have

$$R^k f = \alpha_k \gamma_k + R^{k+1} f,$$

where

$$\alpha_k = \frac{\langle R^k f, x_{k+1} \rangle}{\|\gamma_k\|^2}.$$

The value of $\alpha_k$ is obtained very simply: take the inner product of both sides of (7) with $\gamma_k$:

$$\langle R^k f, \gamma_k \rangle = \alpha_k \|\gamma_k\|^2 + \langle R^{k+1} f, \gamma_k \rangle.$$

The second term on the right is 0, because $R^{k+1} f$ is orthogonal to $x_1, \dots, x_{k+1}$ while $\gamma_k$ lies in their span; this yields the first part of $\alpha_k$, namely $\alpha_k = \langle R^k f, \gamma_k \rangle / \|\gamma_k\|^2$. Then, taking the inner product of both sides of (4) with $R^k f$ and using $\langle R^k f, x_n \rangle = 0$ for $n \le k$, we get $\langle R^k f, \gamma_k \rangle = \langle R^k f, x_{k+1} \rangle$, which gives the second part of $\alpha_k$.

As for the $b_n^k$ in (4), there are explicit steps to compute them; see the "Calculation Details" section of reference [1]. I do not go into them here, because a simpler way to carry out the computation will be introduced later.
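Before that simpler method arrives, here is a small numerical sanity check of equations (4) and (7) (a sketch of my own; X, x_new and r are assumed names): the coefficients $b^k$ come from an ordinary least-squares fit, after which $\gamma_k$ and $\alpha_k$ follow directly:

```matlab
% Verify the auxiliary model (4) numerically. X holds the k atoms
% selected so far as columns, x_new plays x_{k+1}, r plays R^k f.
n = 16; k = 4;
X = randn(n, k);  x_new = randn(n, 1);  r = randn(n, 1);
r = r - X * (X \ r);          % make r orthogonal to the selected atoms,
                              % as R^k f is in the k-th order model (1)
b     = X \ x_new;            % least-squares coefficients b^k
gamma = x_new - X * b;        % residual of the projection, eq. (4)
disp(norm(X' * gamma));       % ~0: gamma is orthogonal to x_1..x_k
disp([r' * gamma, r' * x_new]);            % equal, as used for alpha_k
alpha = (r' * x_new) / (gamma' * gamma);   % alpha_k from the text
```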

3.2 Proof of convergence

From (7), since $\gamma_k$ and $R^{k+1} f$ are orthogonal, moving $R^{k+1} f$ to the left and taking squared norms on both sides gives $\|R^k f\|^2 = \alpha_k^2 \|\gamma_k\|^2 + \|R^{k+1} f\|^2$; substituting the value of $\alpha_k$ yields:

$$\|R^{k+1} f\|^2 = \|R^k f\|^2 - \frac{|\langle R^k f, x_{k+1} \rangle|^2}{\|\gamma_k\|^2}.$$

It can be seen that each residual is smaller than the previous one, so the algorithm is convergent.

3.3 Algorithm Steps

The steps of the entire OMP algorithm are as follows:

[1] Initialize the residual $R^0 f = f$, the index set to empty, and $k = 0$.
[2] Sweep: find the atom $x_{k+1}$ whose inner product with the current residual $R^k f$ has the largest absolute value, and add its index to the index set.
[3] Update: compute $b^k$, $\gamma_k$ and $\alpha_k$ as above (equivalently, re-fit the coefficients of all selected atoms by least squares), and form the new residual $R^{k+1} f$.
[4] If the residual norm is below the required precision, stop; otherwise set $k \leftarrow k+1$ and return to [2].

With the derivation above, the algorithm is quite easy to understand.

The story does not end here: the OMP iteration can also be computed by another method, which a fellow student's write-up [2] describes very well; I quote it directly:

Comparing the Chinese and English descriptions, the essence is the same, though there are subtle differences. By the way, a fellow online also wrote OMP code (source unknown) that was shared here; since that listing is not reproduced, a sketch of my own follows.
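A minimal MATLAB sketch of OMP (the function name and interface are my assumptions, not the original shared listing). The key point is that every iteration re-fits the coefficients of all selected atoms by least squares, which is exactly what keeps the residual orthogonal to every chosen atom:

```matlab
% Minimal OMP sketch (save as omp.m): D is n-by-k with unit-norm
% columns, y is the signal, K is the maximum number of atoms to select.
function [x, r, S] = omp(D, y, K)
    x = zeros(size(D, 2), 1);
    r = y;                              % current residual R^k f
    S = [];                             % indices of selected atoms
    for t = 1:K
        [~, idx] = max(abs(D' * r));    % sweep: best-matching atom
        S = [S, idx];
        xS = D(:, S) \ y;               % least-squares re-fit of all
                                        % selected coefficients
        r = y - D(:, S) * xS;           % residual now orthogonal to
                                        % every column in D(:, S)
        if norm(r) < 1e-10, break; end  % stop once precision is reached
    end
    x(S) = xS;
end
```

On the 2-D example from section 2.3, omp(D, y, 2) drives the residual to zero in two steps, whereas plain MP only approaches it.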

Another well-known paper [3] also describes OMP. I bring it up because its description is very rigorous, and a bit hard to digest, but with the foundation above it becomes much easier.


The "sweep" step in its description looks for the index of the column of the dictionary matrix $D$ whose inner product with the current residual is largest in absolute value, and explains why the inner-product criterion is used and how the selection is made; the accompanying figure in [3] illustrates this very clearly.


The "update provisional solution" step in its algorithm is simple: given $A$ and $b$ with $b = Ax$, solve for $x$ in the least-squares sense, i.e. multiply $b$ by the pseudo-inverse of $A$:

$$x = (A^T A)^{-1} A^T b = A^{+} b.$$
It looks like a headache, but in MATLAB it is actually very simple; see the MATLAB code above.
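For instance, the least-squares step is a one-liner (variable names are mine):

```matlab
% Least-squares solution of b = A*x for a tall matrix A.
x = pinv(A) * b;   % explicit pseudo-inverse, as in the formula above
x = A \ b;         % equivalent and preferred in MATLAB practice
```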

We can see that the algorithm flow is clear and easy to understand. That is the charm of the OMP algorithm: simple to use as a tool, with a very interesting idea hidden behind it.

The purpose of this blog: I searched around and found that no one introduces MP and OMP in much detail. Reference [1] explains them very clearly, and anyone interested should look it up. Don't let my boss find out that I search Chinese literature and write Chinese blogs.


References:

[1] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition. 1993.
[2] http://wenku.baidu.com/view/22f3171614791711cc7917e4.html
[3] A. M. Bruckstein, D. L. Donoho, and M. Elad. From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images.
