Thresholded Smoothed l0 Norm for Accelerated Sparse Recovery (English translation of the paper of the same name)


Original link: Thresholded Smoothed l0 Norm for Accelerated Sparse Recovery, http://ieeexplore.ieee.org/document/7069222/

Thresholded Smoothed l0 Norm for Accelerated Sparse Recovery

Han Wang, Qing Guo, Member, IEEE, Gengxin Zhang, Guangxia Li, and Wei Xiang, Senior Member, IEEE


Abstract: The smoothed l0 norm (SL0) algorithm is a fast sparse recovery algorithm that can be extended to the complex domain, making it suitable for many real-time applications. In this letter, we propose an enhanced algorithm named the thresholded smoothed l0 norm (T-SL0) to accelerate the iterative process of SL0. T-SL0 introduces an iteration efficiency indicator and compares it with a preset threshold to decide in real time whether to continue the iteration. By identifying and avoiding inefficient iterations, our approach converges faster than the original SL0 algorithm. Experimental results show that our method effectively accelerates the SL0 algorithm without loss of precision.

Keywords: compressed sensing, sparse recovery, smoothed l0 norm

I. Introduction

Sparsity is an intrinsic attribute of many natural signals, from medical images to astronomical data. Thanks to the theory of compressed sensing [1]-[3], such sparse signals can be efficiently compressed and then recovered by a number of sparse recovery algorithms.

The smoothed l0 norm (SL0) algorithm [4] is an effective heuristic for sparse recovery. Unlike most convex relaxation or greedy basis pursuit algorithms, SL0 uses graduated non-convexity (GNC) [5], a deterministic counterpart of the well-known simulated annealing method [6], to optimize the non-convex l0 norm directly and find the sparsest solution. In contrast to l1-norm-based solvers such as FISTA [7] or SPGL1 [8], SL0 recovers stably in noisy environments under weak sparsity constraints, owing to its l0-norm objective function [9]. Thanks to its proven speed and its extension to the complex domain [10], SL0 can be applied to real-time signal processing tasks such as video reconstruction and channel estimation. However, due to its inflexible fixed-count loop strategy, the algorithm converges slowly in some iterations and very quickly in others, which means that some iterations in SL0 are inefficient. This observation prompted us to accelerate the algorithm by avoiding those inefficient iterations.

To speed up SL0, we first analyze the variable dependencies within the iteration and use one variable as a real-time indicator of convergence efficiency. We then compare this indicator with a threshold: if the absolute value of the indicator exceeds the threshold, the current iteration is considered efficient and is executed; otherwise, it is skipped. Owing to the introduced threshold, the enhanced SL0 algorithm is greatly accelerated compared with the original.

Based on the above idea, this letter presents an enhanced algorithm called the thresholded smoothed l0 norm (T-SL0). Experimental results show that our method accelerates the algorithm by a factor of about two without loss of precision. The remainder of this letter is organized as follows. Section II briefly reviews the original SL0 algorithm. Section III analyzes the variable dependencies in the iterative process, introduces the basic concept of conditional iteration, and proposes the T-SL0 algorithm. Finally, Section IV presents experimental results evaluating the performance of our approach.

II. The Smoothed l0 Norm Algorithm

The basic problem in CS theory is to recover a sparse vector x from its compressed measurement y, i.e., to find a solution of the problem ${P_0}:{\min _x}{\left\| x \right\|_0},\ \mathrm{s.t.}\ y = Ax$, where the ${l_0}$ norm ${\left\| x \right\|_0}$ denotes the number of non-zero elements in x.
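To make the notation concrete, the following minimal NumPy sketch sets up a synthetic instance of $P_0$; the dimensions N, M, K and the Gaussian sensing matrix are illustrative assumptions, not values taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 100, 8                           # assumed sizes: N unknowns, M measurements, K non-zeros

A = rng.standard_normal((M, N)) / np.sqrt(M)    # random sensing matrix
x_true = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x_true[support] = rng.standard_normal(K)        # K-sparse source vector

y = A @ x_true                                  # compressed measurements (M < N)
print("||x||_0 =", np.count_nonzero(x_true))    # the l0 "norm" counts non-zero entries
```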

Because the objective function is non-convex (and thus intractable to optimize directly), the SL0 algorithm solves a smoothed version of ${P_0}$. SL0 approximates ${\left\| x \right\|_0}$ by the smooth function $N - {F_\sigma }(x)$, where $N$ is the dimension of the vector x, ${F_\sigma }(x) = \sum\nolimits_{n = 1}^{N} {{f_\sigma }\left( {{x_n}} \right)}$, and ${f_\sigma }$ belongs to a family of smooth functions that, as the shape parameter $\sigma \to 0$, approximate the Kronecker delta function

$\delta (x) = \left\{ {\begin{array}{ll} {1,}&{\mathrm{if}\ x = 0}\\ {0,}&{\mathrm{if}\ x \ne 0} \end{array}} \right.$

Thus, the original problem ${P_0}$ is equivalent to the problem $Q:{\lim _{\sigma \to 0}}{\max _x}{F_\sigma }(x),\ \mathrm{s.t.}\ y = Ax$.
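As a quick numerical illustration (a sketch using the Gaussian family ${f_\sigma }(x) = {e^{ - {{\left| x \right|}^2}/2{\sigma ^2}}}$ introduced at the end of this section), one can check that $N - F_\sigma(x)$ approaches $\|x\|_0$ as $\sigma \to 0$:

```python
import numpy as np

def F_sigma(x, sigma):
    """Sum of Gaussian kernels; each term tends to 1 at zero entries and 0 elsewhere."""
    return np.sum(np.exp(-np.abs(x) ** 2 / (2 * sigma ** 2)))

x = np.array([0.0, 0.0, 1.3, -0.7, 0.0, 2.1])   # toy vector with ||x||_0 = 3
N = x.size
for sigma in (1.0, 0.1, 0.01):
    print(f"sigma={sigma}: N - F_sigma = {N - F_sigma(x, sigma):.4f}")  # -> 3 as sigma -> 0
```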

However, for a small $\sigma$, the function ${F_\sigma }$ has many local maxima and its maximization is not easy; in contrast, when $\sigma$ is large enough it has no local maxima. SL0 therefore uses GNC, a heuristic that optimizes a non-convex function by gradually shrinking the shape parameter of a smooth surrogate, to prevent the maximization of ${F_\sigma }$ from getting trapped in a local maximum. Specifically, instead of problem $Q$, SL0 solves a sequence of problems ${Q_\sigma }:{\max _x}{F_\sigma }(x),\ \mathrm{s.t.}\ y = Ax$, gradually reducing $\sigma$ in each cycle. Each sub-problem ${Q_\sigma }$ is solved approximately by a few steps of gradient ascent. Thus, the SL0 algorithm contains a two-level loop: the outer loop gradually reduces $\sigma$ so that ${F_\sigma }$ tends to the Kronecker delta function, while the inner loop, for a given $\sigma$, solves ${Q_\sigma }$ with a few gradient-ascent iterations, avoiding local maxima. The SL0 implementation using the Gaussian function family ${f_\sigma }(x) = {e^{ - {{\left| x \right|}^2}/2{\sigma ^2}}}$ is given in Figure 1.

Figure 1. The SL0 algorithm.
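Since the pseudocode in Figure 1 is not reproduced in this translation, here is a minimal NumPy sketch of the SL0 loop as described in [4]; the parameter defaults (decay factor, inner-loop count L, step factor mu, stopping value sigma_min) are common choices and should be read as assumptions rather than the authors' exact settings.

```python
import numpy as np

def sl0(A, y, sigma_decay=0.7, L=3, mu=2.0, sigma_min=1e-4):
    """Minimal SL0 sketch: gradient ascent on F_sigma over {x : Ax = y},
    while the shape parameter sigma is gradually shrunk (GNC)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2-norm feasible initialization
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):                # fixed-count inner loop (what T-SL0 makes conditional)
            delta = x * np.exp(-np.abs(x) ** 2 / (2 * sigma ** 2))  # delta = -sigma^2 * grad F_sigma
            x = x - mu * delta            # ascent step on F_sigma, pushing x toward sparsity
            x = x - A_pinv @ (A @ x - y)  # project back onto the feasible set Ax = y
        sigma *= sigma_decay              # outer loop: shrink sigma toward 0
    return x
```

On the instance sketched above, `sl0(A, y)` typically recovers `x_true` to high accuracy, consistent with the stable-recovery condition of [9].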

III. The Thresholded Smoothed l0 Norm Algorithm

A. Analyzing Variable Dependencies

To uncover the variable dependencies in the SL0 iteration, first define $g(x) \buildrel \Delta \over = - x{e^{ - {{\left( {\left| x \right|/\sqrt 2 {\sigma ^{(j)}}} \right)}^2}}}$, whose partial derivative is $\partial g(x)/\partial \bar x = \left( {{{\left( {\left| x \right|/{\sigma ^{(j)}}} \right)}^2} - 1} \right){e^{ - {{\left( {\left| x \right|/\sqrt 2 {\sigma ^{(j)}}} \right)}^2}}}$. Then, in the i-th iteration, the n-th element of the step vector $\delta {x^{(i)}}$ can be written as $\delta x_n^{(i)} = g\left( {x_n^{(i - 1)}} \right)$. Furthermore, throughout the analysis we assume that the sparsity of the source does not exceed the theoretical bound [9], which ensures stable recovery.
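As a sanity check on this derivative (treating x as real, where $\partial/\partial \bar x$ reduces to an ordinary derivative; for complex x the calculus of [10] applies), a finite-difference comparison takes a few lines; the values of sigma and x are arbitrary:

```python
import numpy as np

sigma = 0.5
g  = lambda x: -x * np.exp(-(np.abs(x) / (np.sqrt(2) * sigma)) ** 2)
dg = lambda x: ((np.abs(x) / sigma) ** 2 - 1) * np.exp(-(np.abs(x) / (np.sqrt(2) * sigma)) ** 2)

x, h = 0.3, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)   # central finite difference
print(dg(x), numeric)                       # the two values agree to ~1e-9
```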

Let $I = LJ$ be the total number of iterations, where $L$ is the inner-loop count and $J$ the number of outer-loop cycles. Using the Lagrange mean value theorem, we have

$\delta x_n^{(i)} - \delta x_n^{(I)} = \frac{{\partial g}}{{\partial \bar x}}\left( {\xi _n^{(i)}} \right)\left( {x_n^{(i - 1)} - x_n^{(I - 1)}} \right)$, (1)

where $\xi _n^{(i)}$ lies between $x_n^{(i - 1)}$ and $x_n^{(I - 1)}$. When the iteration terminates, $\delta x_n^{(I)} \to 0$ and $x_n^{(I - 1)} \to {x_n}$, where ${x_n}$ is the n-th element of x. Thus, (1) can be written in vector form as

$\delta {x^{(i)}} = {D^{(i)}}\left( {{x^{(i - 1)}} - x} \right)$, (2)

where ${D^{(i)}} = \mathrm{diag}\left( {\frac{{\partial g}}{{\partial \bar x}}\left( {\xi _1^{(i)}} \right), \ldots ,\frac{{\partial g}}{{\partial \bar x}}\left( {\xi _N^{(i)}} \right)} \right)$ is a diagonal matrix with inverse ${\left( {{D^{(i)}}} \right)^{ - 1}}$. Thus,

${x^{(i - 1)}} - x = {\left( {{D^{(i)}}} \right)^{ - 1}}\delta {x^{(i)}}$, (3)

Taking the l2 norm on both sides of (2) and (3) yields

$\left\{ {\begin{array}{l} {{{\left\| {\delta {x^{(i)}}} \right\|}_2} \le \mathop {\max }\limits_{1 \le n \le N} \left\{ {\left| {\frac{{\partial g}}{{\partial \bar x}}\left( {\xi _n^{(i)}} \right)} \right|} \right\} \cdot {{\left\| {{x^{(i - 1)}} - x} \right\|}_2}}\\ {{{\left\| {{x^{(i - 1)}} - x} \right\|}_2} \le \mathop {\max }\limits_{1 \le n \le N} \left\{ {\left| {{{\frac{{\partial g}}{{\partial \bar x}}\left( {\xi _n^{(i)}} \right)}^{ - 1}}} \right|} \right\} \cdot {{\left\| {\delta {x^{(i)}}} \right\|}_2}} \end{array}} \right.$, (4)

It is easy to see that

$0 \le \frac{{{{\left\| {\delta {x^{(i)}}} \right\|}_2}}}{{{{\left\| {{x^{(i - 1)}} - x} \right\|}_2}}} \le 1$, (5)

Defining $\mathrm{rmse} = {\left\| {\hat x - x} \right\|_2}/\sqrt N$, (5) can be rewritten as

${\left\| {\delta {x^{(i)}}} \right\|_2} = {c^{(i)}}\,\mathrm{rmse}^{(i - 1)}$, (6)

where the scale factor ${c^{(i)}} \in \left[ {0,\sqrt N } \right],i = 1, \ldots ,I$. Since the GNC iterations are quasi-stationary, the recovered vector ${x^{(i - 1)}}$, together with the corresponding $\mathrm{rmse}^{(i - 1)}$ and the step vector $\delta {x^{(i)}}$ determined by ${x^{(i - 1)}}$, changes only slightly between successive iterations [4]. Therefore, the scale factor ${c^{(i)}}$ also changes only slightly from one iteration to the next, i.e., $r_c^{(i)} = {c^{(i)}}/{c^{(i - 1)}} \approx 1$.

The stability of ${c^{(i)}}$ implies that we can go beyond the loose bounds in (6) and predict that ${\left\| {\delta {x^{(i)}}} \right\|_2}$, the Euclidean length of the step vector in the i-th iteration, is proportional to the rmse of the (i-1)-th iteration. Because x is unknown, the rmse cannot be computed during the iterations; instead, the step size ${\left\| {\delta x} \right\|_2}$ can serve as an indicator of the real-time rmse level of the solution.

B. The Main Idea of Our Approach

In the SL0 algorithm, the rmse of the recovered vector gradually decreases with each iteration as the vector approaches its true value x. However, note that the inner loop in Figure 1 always executes L times regardless of efficiency. Such an inflexible policy may result in some inefficient iterations. Our extensive simulations show that the rmse decreases slowly in some iterations and quickly in others, i.e., the iteration efficiency changes from iteration to iteration. Motivated by this observation, the T-SL0 algorithm introduces an iteration threshold to perform only the efficient iterations while skipping the inefficient ones.

Specifically, we consider the relative change of the rmse, which reflects the efficiency of successive iterations, as follows:

$\begin{array}{l}
r_{rmse}^{(i - 1)} = \frac{{\mathrm{rmse}^{(i - 1)} - \mathrm{rmse}^{(i - 2)}}}{{\mathrm{rmse}^{(i - 2)}}}\\
\quad = \frac{{\frac{1}{{r_c^{(i)}}}{{\left\| {\delta {x^{(i)}}} \right\|}_2} - {{\left\| {\delta {x^{(i - 1)}}} \right\|}_2}}}{{{{\left\| {\delta {x^{(i - 1)}}} \right\|}_2}}}\\
\quad \approx \frac{{{{\left\| {\delta {x^{(i)}}} \right\|}_2} - {{\left\| {\delta {x^{(i - 1)}}} \right\|}_2}}}{{{{\left\| {\delta {x^{(i - 1)}}} \right\|}_2}}} = r_{step}^{(i)}
\end{array}$, (7)

where $r_{step}^{(i)}$ is the relative change of the step size. Equation (7) means that ${r_{step}}$ can be regarded as an indicator of real-time convergence efficiency.
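In code, the indicator is just the relative change between consecutive step-vector norms. The `step_norms` list below is a hypothetical record of $\|\delta x^{(i)}\|_2$ values logged during the inner loop:

```python
import numpy as np

def r_step(step_norms):
    """Relative change of the step size between consecutive iterations, per (7)."""
    s = np.asarray(step_norms, dtype=float)
    return (s[1:] - s[:-1]) / s[:-1]

print(r_step([1.00, 0.55, 0.52, 0.20]))
# large |r_step| -> the rmse is still dropping fast; small |r_step| -> inefficient iteration
```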

Figure 2. The T-SL0 algorithm.

Compared with the original SL0 algorithm in Figure 1, the proposed algorithm computes $r_{step}^{(i)}$ via (7) and compares it with a preset threshold $\lambda$. If $\left| {r_{step}^{(i)}} \right| \ge \lambda$, then by the quasi-stationary nature of GNC, the rmse declined rapidly in the (i-1)-th iteration and is likely to do so again in the i-th iteration, so the iteration is executed. Otherwise, if $\left| {r_{step}^{(i)}} \right| < \lambda$, the i-th and subsequent iterations of the current inner loop are considered inefficient, so the algorithm skips them and proceeds to the next loop.

The details of the T-SL0 algorithm are presented in Figure 2. Our method shares the same framework as SL0 in Figure 1, with an additional evaluation step in the inner loop. Computing $r_{step}^{(i)}$ introduces very little computational complexity, while the new algorithm gains a notable speedup by avoiding inefficient iterations.
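Figure 2 is likewise unavailable here; the sketch below grafts the threshold test onto the SL0 loop from Section II. The skip logic (abandoning the rest of the inner loop once $|r_{step}^{(i)}| < \lambda$) is our reading of the description above and, like the parameter defaults, is an assumption rather than the authors' exact implementation.

```python
import numpy as np

def t_sl0(A, y, lam, sigma_decay=0.7, L=3, mu=2.0, sigma_min=1e-4):
    """T-SL0 sketch: SL0 with a conditional inner loop that is abandoned as soon
    as the efficiency indicator |r_step| falls below the threshold lam."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y
    sigma = 2.0 * np.max(np.abs(x))
    prev_norm = None
    while sigma > sigma_min:
        for _ in range(L):
            delta = x * np.exp(-np.abs(x) ** 2 / (2 * sigma ** 2))
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - y)       # projection onto Ax = y
            norm = np.linalg.norm(mu * delta)  # step size ||delta x^(i)||_2
            inefficient = (prev_norm is not None
                           and abs(norm - prev_norm) / prev_norm < lam)  # |r_step| < lambda
            prev_norm = norm
            if inefficient:
                break                          # skip the rest of this inner loop
        sigma *= sigma_decay                   # proceed to the next (smaller) sigma
    return x
```

Compared with the `sl0` sketch above, the only change is the evaluation step, matching the letter's claim that computing $r_{step}^{(i)}$ adds negligible complexity.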

The iteration efficiency threshold $\lambda$ determines how many iterations will be executed. If $\lambda$ is chosen too small or too large, there will be redundant or insufficient iterations, respectively. The threshold should therefore be chosen appropriately. In most applications, such as video reconstruction or channel estimation, the recovery algorithm is performed frame by frame. The sparse structure of the source (video samples or channel impulse responses) changes slowly relative to the frame length, so the iterative process changes slowly across successive frames. Therefore, a sensible threshold can be determined by taking the average iteration efficiency of some frames as the threshold value.

Specifically, we can run the original SL0 algorithm on the first frame and record the sequence $r_{step}^{(i)},i = 1, \ldots ,I$. Let

$\lambda = \left| {\frac{1}{I}\sum\limits_{i = 1}^{I} {r_{step}^{(i)}} } \right|$, (8)

be the average iteration efficiency of the first frame. Taking this $\lambda$ as the threshold for T-SL0, roughly half of the iterations are identified as inefficient and skipped. Thus, the whole iterative process of sparse signal recovery can be accelerated by a factor of about two.
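Training the threshold per (8) then amounts to averaging the indicator over one full SL0 run on the first frame; `step_norms_frame0` below is a hypothetical log of step sizes from such a run:

```python
import numpy as np

def train_lambda(step_norms_frame0):
    """Eq. (8): lambda = |average r_step| over the first frame's SL0 iterations."""
    s = np.asarray(step_norms_frame0, dtype=float)
    r = (s[1:] - s[:-1]) / s[:-1]
    return abs(np.mean(r))

lam = train_lambda([1.00, 0.60, 0.55, 0.30, 0.28, 0.27])
print(lam)   # this value is then passed to t_sl0 for the subsequent frames
```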

Figure 3. Variables during the SL0 iterations.

IV. Experimental Results

Figure 3 shows the values of several variables during the SL0 iterations. It is evident that the step size ${\left\| {\delta {x^{(i)}}} \right\|_2}$ and the corresponding $\mathrm{rmse}^{(i - 1)}$ decrease together, so the scale factor ${c^{(i)}} = {\left\| {\delta {x^{(i)}}} \right\|_2}/\mathrm{rmse}^{(i - 1)}$ remains stable and $r_c^{(i)} = {c^{(i)}}/{c^{(i - 1)}} \approx 1$. Accordingly, the relative changes of ${\left\| {\delta x} \right\|_2}$ and the rmse, namely $r_{step}^{(i)}$ and $r_{rmse}^{(i - 1)}$, agree well with each other. These results confirm (6) and (7) of Section III and show that the convergence efficiency indicator ${r_{step}}$ does reflect the real-time iteration efficiency.

In the following experiment, we first recovered the sparse vector x with SL0 to determine the iteration efficiency threshold $\lambda$, and then used T-SL0 to recover x. The iterative recovery processes of SL0 and T-SL0 are plotted in Figure 4. It shows that both algorithms converge as the iterations proceed, and that a larger $\left| {{r_{step}}} \right|$ corresponds to a faster rmse drop. Owing to the threshold $\lambda$, T-SL0 executes iterations only when $\left| {{r_{step}}} \right|$ is high. In this experiment, T-SL0 terminates after 168 iterations (14.9 ms) instead of the 354 iterations (29.1 ms) required by SL0, and yields an rmse at the same level as SL0.
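The spirit of this comparison can be reproduced with the sketches above (not the authors' code); the demo threshold `lam = 0.05` is an arbitrary stand-in for a value trained via (8):

```python
import time
import numpy as np

# reuses A, y, x_true from the instance in Section II and the sl0 / t_sl0 sketches
def rmse(x_hat, x_ref):
    return np.linalg.norm(x_hat - x_ref) / np.sqrt(x_ref.size)

for name, solve in (("SL0", lambda: sl0(A, y)),
                    ("T-SL0", lambda: t_sl0(A, y, lam=0.05))):
    t0 = time.perf_counter()
    x_hat = solve()
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{name}: rmse = {rmse(x_hat, x_true):.2e}, time = {ms:.1f} ms")
```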

Figure 5 compares the rmse descent of the T-SL0 algorithm at different thresholds, where the standard threshold is derived from (8) and the no-threshold case corresponds to the original SL0 algorithm. The results in Figure 5 show that a lower threshold results in too many iterations (seen from the tail of the curve), while a higher threshold results in too few iterations to approach the true rmse. Therefore, the average iteration efficiency given by (8) is a reasonable choice of threshold.

To further evaluate the recovery capability, we compare the phase transitions of T-SL0, SL0, FISTA, and SPGL1 in Figure 6. The phase space $\left( {\delta ,\rho } \right) \in {\left[ {0,1} \right]^2}$ is divided into an evenly spaced grid, where $\delta = M/N$ and $\rho = K/M$ are the normalized number of measurements and the normalized sparsity, respectively. At each grid point, ${10^4}$ independent sparse recoveries are performed; the average run times per signal are 17.8 ms (T-SL0), 34.9 ms (SL0), 87.2 ms (FISTA), and 63.5 ms (SPGL1). Based on these repeated experiments, a phase transition curve divides the space into two parts: the lower-right part is statistically recoverable, while the upper-left part is not. The results show that the recovery capability of T-SL0 differs only slightly from that of SL0, and both outperform the l1-norm solvers FISTA and SPGL1 at a computational load of the same order or less.
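A scaled-down version of this phase-transition sweep can be sketched as follows; the grid point, trial count, and success criterion (rmse below a tolerance) are assumptions, and far fewer than the letter's $10^4$ trials per point are used here for speed:

```python
import numpy as np

# reuses the sl0 sketch from Section II; swap in t_sl0 to compare curves
def success_rate(N, delta, rho, trials=20, tol=1e-3, seed=1):
    """Fraction of successful recoveries at phase-space point (delta, rho) = (M/N, K/M)."""
    rng = np.random.default_rng(seed)
    M = max(1, int(delta * N))
    K = max(1, int(rho * M))
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((M, N)) / np.sqrt(M)
        x = np.zeros(N)
        x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
        x_hat = sl0(A, A @ x)
        ok += np.linalg.norm(x_hat - x) / np.sqrt(N) < tol
    return ok / trials

print(success_rate(N=128, delta=0.5, rho=0.2))   # well below the transition curve -> near 1.0
```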

In summary, the above experimental results clearly show that, thanks to the introduction of the threshold $\lambda$, T-SL0 runs about twice as fast as SL0 without loss of precision.

Figure 4. Iterative recovery processes of SL0 and T-SL0.

Figure 5. Rmse descent of T-SL0 at different thresholds.

Figure 6. Phase transitions of T-SL0, SL0, FISTA, and SPGL1.

V. Conclusion

In this letter, we propose a thresholded smoothed l0 norm algorithm for real-time sparse signal recovery. Based on an analysis of the variable dependencies, we introduce an indicator and a threshold to evaluate the real-time iteration efficiency. By comparing the indicator with the threshold, T-SL0 performs only the efficient iterations, so it converges faster than the original SL0 algorithm. Our approach shares the same framework as SL0, apart from an added evaluation step inside the iteration; it introduces little extra complexity and speeds up the algorithm by avoiding inefficient iterations. The experimental results show that our method significantly accelerates the SL0 algorithm without loss of precision.

References

[1] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[2] E. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.

[3] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.

[4] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed l0 norm," IEEE Trans. Signal Process., vol. 57, no. 1, pp. 289–301, Jan. 2009.

[5] A. Blake and A. Zisserman, Visual Reconstruction. Cambridge, MA, USA: MIT Press, 1987.

[6] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.

[7] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci., vol. 2, no. 1, pp. 183–202, 2009.

[8] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput., vol. 31, no. 2, pp. 890–912, 2008.

[9] M. Babaie-Zadeh and C. Jutten, "On the stable recovery of the sparsest overcomplete representations in presence of noise," IEEE Trans. Signal Process., vol. 58, no. 10, pp. 5396–5400, Oct. 2010.

[10] G. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "Complex-valued sparse representation based on smoothed l0 norm," in Proc. IEEE ICASSP, Las Vegas, NV, USA, Mar. 2008, pp. 3881–3884.

