Introduction to Compressed Sensing (I)

Source: Internet
Author: User

  1. Introduction to Compressed Sensing

  

Compressed sensing (also called compressive sampling) exploits the sparsity of the original scene, either directly or after transformation to a suitable domain: it takes far fewer measurements than conventional sampling while still capturing enough information to reconstruct the original scene.

For example, suppose the image of a scene has 2 million pixels, each represented by 8 bits, requiring about 2 MB of storage, but after removing redundancy only 100,000 pixels carry the essential information. If we can find those 100,000 effective pixels, we can reconstruct the original image well and achieve better compression and reconstruction. Two questions then arise: 1) How do we find the key 100,000 pixels, i.e., the "basis" of the whole image? 2) The remaining 1.9 million pixels also contribute to the image's details, so keeping only the 100,000 pixels may distort the image; how do we deal with that? Let us consider the first question first.

Consider the two-dimensional case. Suppose x0 is a natural scene and y0 is its imaging result. Reshape both into one-dimensional column vectors: an n*1 vector x and an m*1 vector y, where m << n. The relationship between x and y can then be written as y = φx, where φ is the m*n measurement matrix. This raises the two current research hotspots: 1) How to recover the n*1 vector x from the m*1 vector y, which is an underdetermined problem. 2) How to design φ so that m is as small as possible.
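As a small numerical sketch of this measurement model (the sizes n and m and the random matrix here are arbitrary choices for illustration, not from the original text):

```python
import numpy as np

# Illustrative sizes only: an n-dimensional scene and m measurements, m << n.
n, m = 256, 64

rng = np.random.default_rng(0)
x = rng.standard_normal(n)           # original scene, reshaped to an n*1 vector
Phi = rng.standard_normal((m, n))    # m*n measurement matrix
y = Phi @ x                          # m*1 measurement vector: y = Phi x

# m equations in n unknowns: recovering x from y alone is underdetermined.
print(Phi.shape, y.shape)
```

Since m < n, the linear system has infinitely many solutions; the rest of the article is about how sparsity singles out the right one.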

2. Recovery of Sparse Signals

The sparse representation of x is x = ψθ, where ψ is called the basis matrix (sparsifying matrix), of size n*n, and θ is K-sparse: it is the representation of the signal in a transform domain where it is sparse. Hence y = φψθ = Aθ, where A = φψ is the observation matrix.
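A minimal sketch of the factorization y = φψθ = Aθ. The basis ψ here is a random orthonormal matrix standing in for a real wavelet or DCT basis, and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, K = 128, 48, 4

# Stand-in orthonormal basis Psi (a real system would use a wavelet or DCT basis).
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))

theta = np.zeros(n)                                # K-sparse coefficient vector
theta[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)

x = Psi @ theta                                    # x = Psi theta: dense in x, sparse in theta
Phi = rng.standard_normal((m, n))                  # m*n measurement matrix
A = Phi @ Psi                                      # observation matrix A = Phi Psi
y = A @ theta                                      # y = Phi x = Phi Psi theta = A theta
```

The point of the factorization is that the same y can be viewed either as measurements of the dense x or as measurements of the sparse θ.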

We usually recover the data in one of two ways. The first is matching pursuit: find the basis vector (wavelet) most correlated with the image, remove that wavelet's component from the image, then find a new vector linearly independent of those already chosen and remove its component in turn, repeating until the chosen basis vectors explain all the data. The second is basis pursuit: among all combinations of wavelets consistent with the data (the image), find the "sparsest" one, i.e., the one for which the sum of the absolute values of all the coefficients is as small as possible. (This minimization tends to force most of the coefficients to vanish.) This yields the sparsest representation and increases the compression rate. The minimization can be computed in reasonable time using convex optimization algorithms such as the simplex method. The first method is faster; the second works better in the presence of noise.
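The greedy matching-pursuit idea can be sketched as follows. This is a simplified orthogonal matching pursuit; the function name, matrix sizes, and sparsity level are illustrative assumptions, not from the original text:

```python
import numpy as np

def omp(A, y, k):
    """Simplified orthogonal matching pursuit: greedily build a k-term support."""
    m, n = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with what is still unexplained.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    theta = np.zeros(n)
    theta[support] = coef
    return theta

# Demo on a synthetic 3-sparse signal.
rng = np.random.default_rng(1)
n, m, k = 64, 32, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
theta_true = np.zeros(n)
theta_true[[5, 20, 40]] = [1.5, -2.0, 0.7]
theta_hat = omp(A, A @ theta_true, k)
```

Each iteration explains a little more of the measurement, which is the "remove the component, then continue" loop described above.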

To understand norms: the 0-"norm" of a vector counts its nonzero elements, and the p-norm is defined as

||x||p = (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p)
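A quick numeric check of these definitions (the example vector is an arbitrary choice):

```python
import numpy as np

x = np.array([3.0, 0.0, -4.0, 0.0])

l0 = np.count_nonzero(x)             # "norm" 0: number of nonzero entries
l1 = np.sum(np.abs(x))               # p = 1: sum of absolute values
l2 = np.sum(np.abs(x) ** 2) ** 0.5   # p = 2: Euclidean length

print(l0, l1, l2)  # 2 7.0 5.0
```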

If we use the 0-norm, the decoding problem becomes

(P0): min ||x||0 subject to y = φx

The solution of P0 is denoted δ0(y); that is, among all vectors x satisfying the linear system, we select the one with the fewest nonzero elements.
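To see why solving P0 directly is combinatorial, here is a brute-force decoder that tries every support of increasing size; the function name and sizes are illustrative, and this exponential search over supports is exactly what makes P0 intractable at scale:

```python
import numpy as np
from itertools import combinations

def l0_decode(Phi, y, max_k, tol=1e-9):
    """Brute-force P0: try every support of size 1, 2, ..., max_k for an exact fit.
    The number of candidate supports grows combinatorially with n."""
    m, n = Phi.shape
    for k in range(1, max_k + 1):
        for support in combinations(range(n), k):
            cols = Phi[:, list(support)]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ coef - y) < tol:
                x = np.zeros(n)
                x[list(support)] = coef
                return x
    return None

# Demo: a 2-sparse vector measured by a random 6*12 matrix.
rng = np.random.default_rng(5)
Phi = rng.standard_normal((6, 12))
x_true = np.zeros(12)
x_true[[3, 9]] = [2.0, -1.0]
x_hat = l0_decode(Phi, Phi @ x_true, max_k=3)
```

At this toy size the search succeeds instantly; for a realistic n the number of supports explodes, which motivates the relaxation below.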

There is the following theorem:

Because P0 is an NP-complete problem and very hard to solve directly, we consider a higher-order norm instead and define decoding as the solution of the following problem:

(P1): min ||x||1 subject to y = φx
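P1 can be solved as a linear program using the standard split x = u − v with u, v ≥ 0; the sketch below relies on scipy, and the sizes are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve (P1): min ||x||_1 subject to Phi x = y, as a linear program.
    Write x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)."""
    m, n = Phi.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])      # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Demo: a 2-sparse vector observed through 10 random measurements.
rng = np.random.default_rng(7)
Phi = rng.standard_normal((10, 20))
x_true = np.zeros(20)
x_true[[4, 13]] = [1.0, -3.0]
x_hat = basis_pursuit(Phi, Phi @ x_true)
```

Unlike the brute-force P0 search, this runs in polynomial time, and at sizes like these the l1 minimizer typically coincides with the sparse solution.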

The solution set of y = φx is a particular solution of y = φx plus the null space of φ (the solutions of φx = 0).

Necessary and sufficient conditions for the solutions of P1 and P0 to coincide can be given using null space properties, but it is difficult to verify theoretically whether a given matrix φ satisfies the null space property. We therefore consider the matrix's RIP property instead.

3. RIP Definition

RIP (the restricted isometry property) is defined as follows: a matrix φ satisfies the RIP of order K with constant δK ∈ (0, 1) if, for every K-sparse vector x,

(1 - δK) ||x||2^2 ≤ ||φx||2^2 ≤ (1 + δK) ||x||2^2

Ways to understand RIP:

1) Energy view. RIP uses the square of the 2-norm (the energy) to describe a stable energy property. Dividing the inequality by ||x||2^2, the energy ratio ||φx||2^2 / ||x||2^2 stays within a narrow range around 1; that is, if the scene is K-sparse, the measurement approximately preserves the length/energy of its K important components.

2) Similarity to an orthogonal matrix. If φ were an orthogonal matrix, the inequality would hold exactly (with δ = 0), but to reduce the number of measurements φ has fewer rows than columns, so δ describes how similar φ is to an orthogonal matrix: the smaller δ, the more similar.

3) Unique mapping. The RIP (restricted isometry property) ensures that the sensing matrix does not map two different K-sparse signals to the same measurement vector (guaranteeing a one-to-one mapping between the original space and the measurement space). This requires that every matrix formed from 2K columns extracted from the sensing matrix be non-singular.
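The energy-preservation reading of RIP can be checked empirically for a Gaussian matrix: sample random K-sparse vectors and watch the ratio ||φx||2^2 / ||x||2^2 concentrate around 1. The sizes and number of trials below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, K = 256, 96, 5

# Gaussian entries scaled by 1/sqrt(m), so the expected energy ratio is 1.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    x[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# The observed deviation from 1 plays the role of the RIP constant delta.
delta = max(1 - min(ratios), max(ratios) - 1)
print(f"energy ratio in [{min(ratios):.2f}, {max(ratios):.2f}], empirical delta ~ {delta:.2f}")
```

Note this is only a random sample, not a certificate: verifying RIP exactly would require checking every K-sparse support, which is itself combinatorial.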

4. RIP Supplement

Everything above concerns the observation matrix A (since we consider the sparse representation θ of x in the transform basis), whereas in practice it is the measurement matrix φ that we design. So how can the measurement matrix be made to satisfy the RIP?

Because the RIP can be described as "similarity between a matrix and a standard orthogonal matrix", what the measurement matrix must ensure is that its basis vectors are uncorrelated (incoherent) with the sparsifying basis. In practice, Gaussian random matrices, binary (Bernoulli) random matrices, partial Fourier matrices, partial Hadamard matrices, and so on satisfy the RIP with high probability.
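Sketches of three of the constructions named above (the scalings and sizes are illustrative choices; a partial Hadamard matrix would be built analogously by sampling rows of a Hadamard matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 32, 128

# Gaussian random matrix.
gaussian = rng.standard_normal((m, n)) / np.sqrt(m)

# Binary (Bernoulli) random matrix with +-1 entries.
bernoulli = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Partial Fourier matrix: m randomly selected rows of the n*n DFT matrix.
dft = np.fft.fft(np.eye(n))
partial_fourier = dft[rng.choice(n, size=m, replace=False)] / np.sqrt(m)
```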

5. Citations

http://www.cnblogs.com/AndyJee/p/5085827.html

http://blog.csdn.net/abcjennifer/article/details/7721834

Compressed sensing and single-pixel cameras, Terence Tao (terrytao)

Compressed sensing, Xu Zhichang, January 12, 2012
