Compressed sensing (CS) is a sampling method closely related to transform coding, which is widely used in modern communication systems that involve large-scale data sampling. A transform coder maps an input signal from a high-dimensional space to a much lower-dimensional space. Well-known examples of transform coders are the wavelet transform and the ubiquitous Fourier transform.
Compressed sensing applies the idea of transform coding to compressible or sparse signals. A k-sparse, n-dimensional discrete-time signal x is encoded by computing an m-dimensional measurement vector y that is a linear projection of x. This is captured by the concise expression y = Phi x, where Phi is an m x n matrix, usually over the real numbers. In this framework, the projection basis is assumed to be incoherent with the basis in which the signal has a sparse representation.
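To make the measurement model concrete, here is a minimal numpy sketch of encoding a k-sparse signal with a random Gaussian measurement matrix. The dimensions n = 256, m = 64, k = 8 are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8            # signal length, number of measurements, sparsity

# k-sparse signal: k nonzero entries at random positions
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian measurement matrix Phi (m x n), scaled so columns have unit
# expected norm; such ensembles are incoherent with the canonical sparsity basis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# linear measurements y = Phi x: an m-dimensional vector with m << n
y = Phi @ x
```

Note that y has only m = 64 entries, yet, as the rest of this preface explains, the k = 8 nonzero entries of x can still be recovered from it.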
Although reconstructing the signal from y is an ill-posed problem, the a priori knowledge that the signal is sparse makes it solvable. A well-known result of CS theory is that the signal can be recovered by an optimization strategy that finds the sparsest signal consistent with y = Phi x. In other words, the reconstruction problem reduces to an L0-minimization problem. In the noiseless case, L0 minimization requires only m = 2k random projections to reconstruct a k-sparse signal exactly. Unfortunately, L0 minimization is NP-hard. This difficulty has driven a great deal of CS research and practice, centered on the design of measurement and reconstruction algorithms with low computational complexity.
The work of Donoho and Candes shows that CS reconstruction is in fact a polynomial-time problem, at the cost of requiring more than 2k measurements. These results show that solving the L0 problem is not actually necessary for reconstruction: one can instead solve a simpler L1-minimization problem, which can be cast as a linear programming (LP) problem. L1 and L0 minimization are equivalent under certain conditions, namely when the measurement matrix satisfies an appropriate restricted isometry property (RIP).
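The reduction of L1 minimization to a linear program can be sketched as follows. This is a textbook basis-pursuit formulation using scipy, not code from the paper: each |x_i| is bounded by an auxiliary variable t_i, and the sum of the t_i is minimized subject to Phi x = y:

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(Phi, y):
    """Basis pursuit: min ||x||_1 subject to Phi x = y, as a linear program.

    Variables are stacked as [x; t] with the constraints |x_i| <= t_i,
    and the objective minimizes sum(t_i) = ||x||_1.
    """
    m, n = Phi.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])     # minimize sum of t
    # x - t <= 0 and -x - t <= 0 together encode |x| <= t
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([Phi, np.zeros((m, n))])         # Phi x = y, t unconstrained
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]
```

The LP has 2n variables and roughly 2n + m constraints, which hints at the high complexity the next paragraph discusses.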
Although LP techniques play a central role in the design of reconstruction algorithms, their computational complexity remains high, making them difficult to use in many applications. In such settings, fast decoding algorithms, preferably running in linear time, are essential, even if the number of measurements must be increased. Several low-complexity reconstruction techniques have recently been introduced, including group-testing-based methods and belief-propagation-based algorithms.
Recently, a family of iterative greedy algorithms has attracted attention because of their low computational complexity and simple geometric interpretation; they include OMP, ROMP, and StOMP. The basic idea of these methods is to identify the support set of the unknown signal iteratively. In each iteration, one or more coordinates of the vector x are selected for testing by computing the correlations between the residual measurement vector and the columns of Phi. If a column is judged sufficiently reliable, it is added to the current estimate of the support set of x. The pursuit algorithm iterates over such steps until all coordinates of the correct support set have been included in the estimated support set. The computational complexity of OMP depends on the number of iterations required for exact reconstruction: standard OMP runs exactly k iterations, so its reconstruction complexity is roughly O(kmn) (see Section IV-C for details). This complexity is much lower than that of LP algorithms, especially when the sparsity k of the signal is small. However, pursuit algorithms do not offer the same level of reconstruction guarantees as LP algorithms. To guarantee that OMP succeeds, the correlation between any two columns of Phi must be at most 1/(2k), a requirement (proved via the Gershgorin circle theorem) that is stricter than the RIP. The ROMP algorithm can reconstruct all k-sparse signals provided Phi satisfies the RIP with parameter delta_2k <= 0.06/sqrt(log k), which is stronger than the RIP condition required for L1 linear programming by the factor of sqrt(log k) in the denominator.
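The iterate-correlate-select loop described above can be sketched with standard OMP. This is a minimal numpy implementation of the generic textbook algorithm, under the assumption that the sparsity level k is known:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi x."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):                      # standard OMP runs k iterations
        # test coordinates: correlate the residual with every column of Phi
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)               # admit the best column to the support
        # least-squares fit of y on the selected columns, then update residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```

The dominant cost per iteration is the correlation step Phi.T @ residual, which is O(mn); over k iterations this gives the O(kmn) total mentioned above.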
The main contribution of this paper is a new algorithm called Subspace Pursuit (SP). It has provable reconstruction guarantees comparable to those of the LP approach, at much lower computational complexity. The algorithm applies in both the noiseless and the noisy setting. In the noiseless case, if the matrix Phi satisfies the RIP with a certain parameter, the SP algorithm reconstructs the original signal exactly. When the measurements are inaccurate, or the signal is not strictly sparse, the reconstruction distortion is upper-bounded by a constant multiple of the energy of the measurement perturbation. For very sparse signals, with k <= const * sqrt(n), the computational complexity is upper-bounded by O(mnk), and it can even reach O(mn log k) when the signal is sparser still.
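To give a feel for how SP differs from OMP, here is a simplified numpy sketch of the subspace pursuit iteration as commonly described (maintain a size-k candidate support, expand it by k new candidates, re-estimate by least squares, then prune back to the k largest coefficients). Details such as the stopping rule are simplified relative to the paper:

```python
import numpy as np

def subspace_pursuit(Phi, y, k, max_iter=20):
    """Simplified Subspace Pursuit: recover a k-sparse x from y = Phi x."""
    m, n = Phi.shape
    # initialize with the k columns most correlated with y
    T = np.argsort(np.abs(Phi.T @ y))[-k:]
    coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
    residual = y - Phi[:, T] @ coef
    for _ in range(max_iter):
        # expand: add the k columns most correlated with the current residual
        T_new = np.union1d(T, np.argsort(np.abs(Phi.T @ residual))[-k:])
        coef, *_ = np.linalg.lstsq(Phi[:, T_new], y, rcond=None)
        # prune: keep only the k coordinates with the largest magnitudes
        T = T_new[np.argsort(np.abs(coef))[-k:]]
        coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        new_residual = y - Phi[:, T] @ coef
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break                           # residual stopped shrinking
        residual = new_residual
    x_hat = np.zeros(n)
    x_hat[T] = coef
    return x_hat
```

Unlike OMP, which only ever adds coordinates, SP can discard a wrongly chosen coordinate at the pruning step, which is what underlies its stronger, LP-like guarantees.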
Original paper: http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/SubspacePursuit.pdf
When reprinting, please credit the source: http://blog.csdn.net/zhyoulun/article/details/41978129
Compressed sensing: a translation of the preface to the SP (Subspace Pursuit) reconstruction algorithm paper