Talking About Compressed Sensing (16): The RIP of the Sensing Matrix

Source: Internet
Author: User

In compressed sensing one constantly sees the statement that "the matrix satisfies the RIP". This is indeed a compressed-sensing term: the Restricted Isometry Property (RIP).

Note: the RIP property also refers to the sensing matrix, not the measurement matrix.

0. Related concepts and symbols
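The original symbol list is not reproduced here; as a sketch of the setup assumed by the rest of this article (inferred from the formulas referenced later, not copied from the original), the standard compressed sensing notation is:

    x = \Psi \theta, \qquad y = \Phi x = \Phi \Psi \theta = A \theta

where x is the length-N signal, Ψ is the N×N orthonormal sparsifying basis, θ is the K-sparse coefficient vector (at most K nonzero entries), Φ is the M×N measurement matrix with M < N, A = ΦΨ is the M×N sensing matrix, and y is the length-M compressed observation.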

1. RIP definition

Definition: a matrix A is said to satisfy the RIP of order K if there exists a constant δK ∈ (0, 1) such that, for every K-sparse vector θ (a vector with at most K nonzero entries),

(1 - δK)||θ||₂² ≤ ||Aθ||₂² ≤ (1 + δK)||θ||₂²

In summary:

A matrix satisfying the RIP of order 2K guarantees that every K-sparse signal θK is mapped to a unique y. In other words, to be able to recover a K-sparse signal θK from the compressed observation y, the sensing matrix must satisfy the 2K-order RIP; and in a matrix satisfying the 2K-order RIP, any 2K columns are linearly independent.

Interpretation of the bounds:

In the definition above, the bounds of the inequality are symmetric about 1. This is merely a notational convenience; in practice arbitrary bounds can be considered.
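The definition can be probed numerically. Below is a minimal numpy sketch (not from the original article; the matrix sizes are illustrative) that samples random K-sparse vectors and records how far a normalized Gaussian sensing matrix distorts their energy from 1; since the true δK is a maximum over all K-sparse vectors, the sampled value is only a lower bound.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, K = 64, 256, 8                              # measurements, signal length, sparsity
    A = rng.standard_normal((M, N)) / np.sqrt(M)      # Gaussian matrix, unit column norm in expectation

    worst = 0.0
    for _ in range(2000):
        theta = np.zeros(N)
        support = rng.choice(N, size=K, replace=False)
        theta[support] = rng.standard_normal(K)       # random K-sparse vector
        ratio = np.linalg.norm(A @ theta) ** 2 / np.linalg.norm(theta) ** 2
        worst = max(worst, abs(ratio - 1.0))          # |ratio - 1| is the energy distortion

    print(f"empirical lower bound on delta_{K}: {worst:.3f}")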

2. Understanding RIP

Understanding 1: The energy interpretation

The squared 2-norm of a vector is the energy of the signal. Written with this in mind, the RIP inequality is:

(1 - δK)||θ||₂² ≤ ||Aθ||₂² ≤ (1 + δK)||θ||₂²

Here ||Aθ||₂² = ||y||₂² is the energy of the output (observed) signal and ||θ||₂² is the energy of the input signal: the sparse transform x = Ψθ is an orthogonal transform, and an orthogonal transform keeps the energy unchanged (Parseval's theorem from signal theory), so ||θ||₂² = ||x||₂².
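As a quick numerical illustration of the Parseval point (a sketch, not from the original article): any orthonormal basis Ψ leaves the squared 2-norm unchanged, so θ and x = Ψθ carry the same energy.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 128
    Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthonormal basis via QR
    theta = rng.standard_normal(N)
    x = Psi @ theta
    print(np.linalg.norm(x) ** 2, np.linalg.norm(theta) ** 2)   # equal up to rounding error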

Explanation 1:

Explanation 2:

RIP can actually be seen as a measure of how similar a matrix is to an orthonormal matrix: acting on a sparse vector, it changes the L2 energy (squared norm) by no more than a fraction δ of the original vector's energy. RIP is very useful for stability analysis. RIP was introduced by Candès and Tao; the concept can be found in their article "Decoding by Linear Programming".

In fact, in the limiting case δ = 0 (RIP requires 0 < δ < 1), the RIP inequality says that the energy of the observed vector y equals the energy of the signal. Orthogonal transformations in linear algebra have exactly this property and are also called isometric (distance-preserving) transformations (for a two- or three-dimensional signal, the squared 2-norm can be visualized as the distance from the origin). Of course, the sensing matrix A here cannot be an orthogonal matrix (it is not square), but in the limit δ = 0 it would still keep the energy unchanged and could likewise be called an isometry. Since RIP requires 0 < δ < 1, exact energy preservation cannot be achieved, which is why the property is called the restricted (bounded) isometry property.
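The "similarity to an orthonormal matrix" reading can be made concrete: the restricted isometry constant of order K equals the largest spectral-norm deviation of the Gram matrix of any K chosen columns from the identity. The sketch below (not from the original article; sizes are illustrative) checks random column subsets of a Gaussian matrix, which again only lower-bounds the true constant.

    import numpy as np

    rng = np.random.default_rng(2)
    M, N, K = 64, 256, 16
    A = rng.standard_normal((M, N)) / np.sqrt(M)

    delta_lb = 0.0
    for _ in range(500):
        S = rng.choice(N, size=K, replace=False)     # random support of size K
        G = A[:, S].T @ A[:, S]                      # Gram matrix of the selected columns
        dev = np.linalg.norm(G - np.eye(K), 2)       # spectral-norm distance from the identity
        delta_lb = max(delta_lb, dev)

    print(f"empirical lower bound on delta_{K}: {delta_lb:.3f}")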

Understanding 2: The unique mapping interpretation

The unique-mapping interpretation was already discussed in the earlier post on the spark constant; see: http://www.cnblogs.com/AndyJee/p/5083726.html

The RIP (restricted isometry property) guarantees that the sensing matrix does not map two different K-sparse signals to the same observation (that is, it guarantees a one-to-one mapping between the original space and the sparse space), and it requires that every matrix formed by taking any 2K column vectors from the sensing matrix be non-singular (its columns linearly independent).

When δ2K < 1, the ℓ0-minimization problem is guaranteed to have a unique sparse solution; when δ2K < √2 - 1, the ℓ0 and ℓ1 problems are guaranteed to be equivalent (solving the ℓ0 problem directly is NP-hard, so in this case it can be converted into an ℓ1-minimization problem, which is a convex optimization problem).
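To make the ℓ0-to-ℓ1 conversion concrete, here is a minimal sketch (assuming numpy and scipy are available; the sizes are illustrative and this code is not from the original article) that recovers a K-sparse vector from M < N Gaussian measurements by solving the basis pursuit problem min ||θ||₁ subject to Aθ = y as a linear program, using the standard split θ = u - v with u, v ≥ 0.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    M, N, K = 60, 200, 6
    A = rng.standard_normal((M, N)) / np.sqrt(M)

    # Ground-truth K-sparse signal and its compressed observation
    theta_true = np.zeros(N)
    theta_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    y = A @ theta_true

    # Basis pursuit as an LP: minimize 1'u + 1'v  subject to  A(u - v) = y,  u, v >= 0
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    theta_hat = res.x[:N] - res.x[N:]

    print("recovery error:", np.linalg.norm(theta_hat - theta_true))

With these sizes the recovered θ typically matches the true sparse vector up to the solver tolerance; if K is pushed too high relative to M, the ℓ1 solution no longer coincides with the sparsest (ℓ0) one.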

3. RIP Supplement

What we have been discussing is the sensing matrix, but in practice what we actually design is the measurement matrix, so how should the measurement matrix be chosen so that the sensing matrix satisfies the RIP requirement?

The energy interpretation above already mentioned that "RIP can actually be seen as describing how similar a matrix is to an orthonormal matrix". For example, a commonly cited result is that if the measurement matrix Φ is a random Gaussian matrix, then for any orthonormal sparsifying basis Ψ the sensing matrix A = ΦΨ satisfies the RIP with high probability.

Explanation of why any 2K columns of the matrix must be linearly independent:

If the matrix has some set of 2K linearly dependent columns, then there necessarily exists a 2K-sparse signal θ2K ≠ 0 such that Aθ2K = 0. A 2K-sparse signal can always be written as the difference of two K-sparse signals (split the 2K nonzero entries of θ2K into two groups of K; each group, padded with zeros so that the length stays that of the original 2K-sparse signal, gives a K-sparse signal; negate one of the two, and their difference is exactly the original 2K-sparse signal). So A(θK1 - θK2) = 0, i.e. AθK1 = AθK2: two different K-sparse signals θK1 and θK2 give the same y after compressed observation, and the unique mapping cannot be guaranteed. Therefore the matrix must not have any 2K linearly dependent columns; otherwise the unique mapping is not guaranteed.
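The same argument written compactly (a restatement of the paragraph above in symbols): linear dependence of 2K columns gives a nonzero 2K-sparse vector in the null space, which splits into two K-sparse vectors that the matrix cannot distinguish.

    \exists\, \theta_{2K} \neq 0 \text{ (2K-sparse)}:\; A\theta_{2K} = 0,
    \qquad \theta_{2K} = \theta_{K1} - \theta_{K2} \;\; (\theta_{K1}, \theta_{K2} \text{ K-sparse},\ \theta_{K1} \neq \theta_{K2})
    \;\Longrightarrow\; A\theta_{K1} = A\theta_{K2}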

4. Reference Articles

http://blog.csdn.net/jbb0523/article/details/44565647 (Talking About Compressed Sensing (16): The RIP of the Sensing Matrix)
