Compressed sensing theory and a classic algorithm (OMP)


I. The theory of compressed sensing

The theory of compressed sensing states that as long as a signal is compressible, or sparse in some transform domain, an observation matrix that is incoherent with the transform basis can project the high-dimensional signal onto a low-dimensional space, and the original signal can then be reconstructed with high probability from this small number of projections by solving an optimization problem. It can be proved that such projections contain enough information to reconstruct the signal.

Novelty: the sampling rate does not depend on the bandwidth of the signal, but on the structure and content of the information in the signal.
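As a minimal sketch of this sampling idea in MATLAB (all sizes and names here are illustrative, not from the original article):

% Minimal compressed-sensing sampling sketch.
N = 256;                                   % signal dimension
K = 8;                                     % sparsity: number of nonzero entries
M = 64;                                    % number of measurements, M << N
x = zeros(N, 1);
x(randperm(N, K)) = randn(K, 1);           % a K-sparse signal (sparse in the identity basis)
Phi = randn(M, N) / sqrt(M);               % Gaussian random observation matrix
y = Phi * x;                               % M non-adaptive linear projections of x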

II. The compressed sensing process

Compressed sensing has three core problems: the sparse representation of the signal, the design of the observation matrix, and the reconstruction algorithm.

1. Sparse representation of the signal

Mathematical definition of sparsity: let \theta be the vector of transform coefficients of the signal x in an orthonormal basis \Psi, i.e. x = \Psi\theta. If for some 0 < p < 2 and R > 0 the coefficients satisfy

\|\theta\|_p = \Big(\sum_i |\theta_i|^p\Big)^{1/p} \le R,

then the coefficient vector \theta is sparse in some sense.

Another definition: if the support set {i : \theta_i \ne 0} of the transform coefficients has cardinality at most K, then the signal x is said to be K-term sparse.
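In code, checking K-term sparsity amounts to counting the support; a tiny illustrative check:

% nnz counts the support {i : theta_i ~= 0}; theta is K-term sparse if nnz(theta) <= K.
theta = [0 3.2 0 0 -1.5 0 0.7 0]';
K = 3;
is_K_sparse = (nnz(theta) <= K)            % true: the support has cardinality 3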

Redundant dictionaries:

A current hotspot in the study of sparse representation is the sparse decomposition of signals over redundant dictionaries. This is a new theory of signal representation: the basis functions are replaced by an overcomplete, redundant function library called a redundant dictionary, whose elements are called atoms. The dictionary should be chosen to match the structure of the signal being approximated as closely as possible. Finding the best linear combination of K atoms from a redundant dictionary to represent a signal is called sparse approximation, or highly nonlinear approximation, of the signal.
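A common toy construction of a redundant dictionary (a sketch; the particular choice of spike plus DCT bases is illustrative, not from the original article) concatenates two orthonormal bases into an N x 2N overcomplete dictionary:

% Redundant dictionary: spike (identity) basis plus an orthonormal DCT-II
% basis, concatenated column-wise into an N x 2N dictionary of atoms.
N = 64;
[k, n] = ndgrid(0:N-1, 0:N-1);
D = sqrt(2/N) * cos(pi * (2*n + 1) .* k / (2*N));
D(1, :) = D(1, :) / sqrt(2);               % row k = 0 scaling, so that D * D' = I
Dict = [eye(N), D'];                       % atoms are the 2N columns
% A signal that is one spike plus one sinusoid is 2-sparse in Dict,
% while it is dense in either basis taken alone.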

Research on the sparse representation of signals over redundant dictionaries is concentrated in two areas:

(1) How to construct a redundant dictionary suitable for a certain kind of signal;

It is now known that the local cosine basis can characterize the local frequency-domain behavior of audio signals, and the bandlet basis can characterize geometric edges in images. Basis functions of other shapes have also been collected into dictionaries, such as the Gabor basis for characterizing textures and the curvelet basis for characterizing contours.

(2) How to design a fast and effective sparse decomposition algorithm.

From the perspective of sparse decomposition algorithms, the MP (Matching Pursuit) algorithm, based on the greedy-iteration idea, shows great advantages in audio and video signal processing, but it does not yield the globally optimal solution. Donoho and others proposed the basis pursuit (BP) algorithm, which has the advantage of global optimality but very high computational complexity. Although MP converges faster than BP, it lacks global optimality, and its computational cost is still considerable. A series of improved algorithms based on the same greedy-iteration idea have since been developed, such as orthogonal matching pursuit (OMP), tree-based matching pursuit (TMP), and stagewise orthogonal matching pursuit (StOMP).

2. Design of the observation matrix

The signal is sampled as y = \Phi x = \Phi\Psi\theta = A_{CS}\theta, where \Phi is an M \times N observation matrix with M \ll N and A_{CS} = \Phi\Psi. The sampling process here is non-adaptive; that is, \Phi does not need to change according to the signal x. For a given y, finding \theta is a linear programming problem, but because M < N, i.e. the number of equations is smaller than the number of unknowns, the system is underdetermined and in general has no unique solution. However, if \theta is K-term sparse with K \le M \ll N, the problem can be expected to have a definite solution. In that case, if we can determine the positions of the K nonzero coefficients \theta_i in \theta, then the observation vector y is a linear combination of the K columns of A_{CS} corresponding to those nonzero coefficients, and an M \times K system of linear equations can be formed to solve for their specific values.

The restricted isometry property (RIP) gives a necessary and sufficient condition for such a definite solution to exist. For the signal to be reconstructed exactly, the observation matrix must not map two different K-term sparse signals to the same sampling set, which requires that every matrix formed by extracting M column vectors from A_{CS} be nonsingular. From this we can see that the crux of the problem is how to determine the positions of the nonzero coefficients so as to construct a solvable M \times K system of linear equations.

Verifying the RIP directly is difficult, but if the observation matrix and the sparse basis are guaranteed to be incoherent, then A_{CS} satisfies the RIP with high probability. Incoherence means that the vectors {\phi_j} cannot be sparsely represented by the basis vectors {\psi_i}, and vice versa: the more incoherent the two are, the more coefficients each needs in order to represent the other; conversely, the fewer coefficients needed, the stronger the coherence. Choosing a Gaussian random matrix as the observation matrix guarantees, with high probability, both the incoherence and the RIP.
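Incoherence can be checked numerically via the mutual coherence of A_CS, the largest normalized inner product between two distinct columns (a small sketch with illustrative sizes; smaller values are better):

% Mutual coherence of the sensing matrix A_CS = Phi * Psi.
N = 128; M = 32;
Phi = randn(M, N) / sqrt(M);               % Gaussian random observation matrix
Psi = eye(N);                              % sparsifying basis (identity here, for illustration)
A_cs = Phi * Psi;
An = A_cs ./ vecnorm(A_cs);                % normalize each column
G = abs(An' * An);                         % column cross-correlations
mu = max(G(~eye(N)))                       % coherence: the largest off-diagonal entry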

3. Signal reconstruction

To describe the signal reconstruction problem of compressed sensing theory more clearly, we first define the p-norm of a vector x as

\|x\|_p = \Big(\sum_{i=1}^{N} |x_i|^p\Big)^{1/p}.

When p = 0 this gives the 0-norm \|x\|_0, which actually counts the number of nonzero entries of x.

Therefore, under the condition that the signal x is sparse or compressible, the problem of solving the underdetermined system y = A_{CS} x is transformed into the minimum-0-norm problem

\min \|x\|_0 \quad \text{s.t.} \quad y = A_{CS}\, x.

However, obtaining the optimal solution requires enumerating all possible linear combinations of the nonzero positions in x. The numerical computation of this formula is therefore highly unstable, and the problem is NP-hard.
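The combinatorial cost is easy to see in code: a brute-force \ell_0 solver must try every candidate support, and the number of candidates nchoosek(N, K) explodes with N. A sketch (the function name l0_brute_force is hypothetical; feasible only at toy sizes):

% Brute-force l0 recovery: enumerate every K-subset of columns and keep
% the first one that reproduces y exactly. Cost grows as nchoosek(N, K),
% which is why the l0 problem is NP-hard in general.
function x0 = l0_brute_force(y, A, K)
    [~, N] = size(A);
    x0 = zeros(N, 1);                      % returned unchanged if no exact fit is found
    for S = nchoosek(1:N, K)'              % each column S is one candidate support
        xs = A(:, S) \ y;                  % least squares on that support
        if norm(A(:, S) * xs - y) < 1e-10  % exact fit (noiseless case)
            x0(S) = xs;
            return
        end
    end
end
% Toy usage: A = randn(8, 20); x = zeros(20, 1); x([3 11]) = [1 -2];
% x0 = l0_brute_force(A * x, A, 2);        % already nchoosek(20, 2) = 190 trials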

Chen, Donoho and Saunders pointed out that solving the much simpler \ell_1 optimization problem yields the same solution (provided \Phi and \Psi are incoherent):

\min \|x\|_1 \quad \text{s.t.} \quad y = A_{CS}\, x.

This slight difference turns the problem into a convex optimization problem, which can conveniently be reduced to a linear program; a sketch of the reduction follows. The typical representative algorithm is BP (basis pursuit).
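A minimal sketch of that reduction (assuming the Optimization Toolbox's linprog is available; the function name basis_pursuit is illustrative). The standard split x = u - v with u, v >= 0 turns the \ell_1 objective into a linear one:

% Basis pursuit, min ||x||_1 s.t. A*x = y, cast as a linear program via
% the split x = u - v, u >= 0, v >= 0, so that ||x||_1 = sum(u + v).
function x = basis_pursuit(A, y)
    [~, N] = size(A);
    f = ones(2*N, 1);                      % objective: sum of the entries of u and v
    Aeq = [A, -A];                         % equality constraint [A, -A] * [u; v] = y
    lb = zeros(2*N, 1);                    % nonnegativity of u and v
    uv = linprog(f, [], [], Aeq, y, lb, []);
    x = uv(1:N) - uv(N+1:end);             % recombine the solution
end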

The reconstruction algorithms presented so far can be grouped into the following three main categories:

(1) Greedy pursuit algorithms: these methods approximate the original signal step by step by selecting a locally optimal solution at each iteration. They include the MP algorithm, the OMP algorithm, stagewise OMP (StOMP), and regularized OMP (ROMP).

(2) Convex relaxation methods: these methods find an approximation of the signal by converting the non-convex problem into a convex one, e.g. the BP algorithm, interior-point methods, gradient projection methods, and iterative thresholding.

(3) Combinatorial algorithms: these methods require structured sampling support of the signal so that it can be reconstructed quickly by group testing, e.g. Fourier sampling, chaining pursuit, and HHS (Heavy Hitters on Steroids) pursuit.

III. Issues to be studied

Research on compressed sensing theory has already produced some achievements, but many problems remain to be studied. They can be summarized in the following areas:

(1) Whether an optimal deterministic observation matrix exists for stable reconstruction algorithms;

(2) How to construct reconstruction algorithms that are stable, have low computational complexity, and place fewer restrictions on the number of observations, so that compressible signals can be recovered accurately;

(3) How to find an effective and fast sparse decomposition algorithm over redundant dictionaries, which is the key difficulty of compressed sensing theory in that setting;

(4) How to design effective hardware and software to apply compressed sensing theory to the many practical problems it could address; research here is far from sufficient;

(5) The general p-norm optimization problem is itself far from solved;

(6) Signal reconstruction from noisy signals or noisy sampling processes is also difficult, and existing results are not satisfactory. In addition, the fusion of compressed sensing theory with other areas of signal processing, such as signal detection and feature extraction, is insufficient; research on the intrinsic links between CS theory and machine learning has only just begun.

Example: the OMP algorithm

When the MP algorithm is applied to CS reconstruction, the core idea is that at each iteration the column of the random measurement matrix that best matches the current observation residual r (initialized to the observation signal y) is selected as the matching atom.

The flow of the MP algorithm for CS reconstruction is as follows:

Step 1: Initialize the residual r_0 = y, the index set \Lambda_0 = \emptyset, and the counter t = 1.

Step 2: Select the index of the atom that best matches the current observation residual: \lambda_t = \arg\max_{1 \le j \le N} |\langle r_{t-1}, a_j \rangle|, where a_j is the j-th column of A_{CS}.

Step 3: Update the candidate subset: \Lambda_t = \Lambda_{t-1} \cup \{\lambda_t\}.

Step 4: Compute the new estimated signal and the new observation residual; for MP, with unit-norm atoms, the residual update is r_t = r_{t-1} - \langle r_{t-1}, a_{\lambda_t} \rangle a_{\lambda_t}.

Step 5: Set t = t + 1 and repeat Steps 2 through 4 until the iteration termination criterion is met.

Building on the atom selection criterion of the MP algorithm, the OMP algorithm orthogonalizes the set of atoms selected so far (the chosen columns of A_{CS}) at each iteration, and then projects the signal (the observation vector y in CS) onto the span of those selected atoms; this orthogonalization makes the observation residual r decrease rapidly, which reduces the number of iterations.
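To make the difference concrete, a minimal self-contained sketch of one MP update versus one OMP update (illustrative, not from the original article):

% One MP update vs. one OMP update on a toy problem with unit-norm atoms.
rng(0);
A = randn(16, 64); A = A ./ vecnorm(A);    % unit-norm atom matrix
x = zeros(64, 1); x([5 20]) = [2 -1];
y = A * x;                                 % measurements
r = y; S = [];                             % residual and selected-index set
[~, pos] = max(abs(A' * r));               % both MP and OMP pick this atom
r_mp = r - (A(:, pos)' * r) * A(:, pos);   % MP: subtract the projection on that atom only
S = [S, pos];                              % OMP: grow the selected set ...
coef = A(:, S) \ y;                        % ... and re-solve least squares over all selected atoms
r_omp = y - A(:, S) * coef;                % residual now orthogonal to span(A(:, S))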

The MATLAB function of OMP:

% s - measurement vector (M x 1); T - observation matrix (M x N); N - length of the vector to recover
function hat_y = omp(s, T, N)
sz = size(T);                                % size of the observation matrix
M = sz(1);                                   % number of measurements
hat_y = zeros(1, N);                         % spectral-domain (transform-domain) vector to be reconstructed
Aug_t = [];                                  % augmented matrix of selected atoms (initially empty)
r_n = s;                                     % residual, initialized to the measurement vector
product = zeros(1, N);                       % projection coefficients
for times = 1:M/4                            % number of iterations (sparsity assumed to be M/4)
    for col = 1:N                            % all column vectors of the recovery matrix
        product(col) = abs(T(:, col)' * r_n);  % inner product of each column with the residual
    end
    [val, pos] = max(product);               % position of the largest projection coefficient
    Aug_t = [Aug_t, T(:, pos)];              % expand the matrix of selected atoms
    T(:, pos) = zeros(M, 1);                 % zero out the selected column (it should really be removed; zeroing it is simpler)
    aug_y = (Aug_t' * Aug_t) \ (Aug_t' * s); % least squares, minimizing the residual
    r_n = s - Aug_t * aug_y;                 % update the residual
    pos_array(times) = pos;                  % record the position of the largest projection coefficient
    if norm(r_n) < 1e-9                      % stop once the residual is small enough (norm gives the modulus of the vector)
        break
    end
end
hat_y(pos_array) = aug_y;                    % reconstructed vector

Call statement (the variables Y, r and a are supplied by the calling script): rec = omp(Y(:, i), r, a);
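A hedged end-to-end usage example of the omp function above (sizes are illustrative; the test signal is sparse directly in the spike basis, so T is simply the Gaussian measurement matrix):

% End-to-end test of the omp function above.
N = 256; M = 64; K = 10;                   % K must not exceed M/4, the iteration cap in omp
x = zeros(N, 1);
x(randperm(N, K)) = randn(K, 1);           % K-sparse ground truth (spike basis)
T = randn(M, N) / sqrt(M);                 % Gaussian sensing matrix
s = T * x;                                 % measurements
hat_x = omp(s, T, N);                      % reconstruct; returns a 1 x N row vector
fprintf('relative error: %.2e\n', norm(hat_x' - x) / norm(x));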
