P0 Problem Solving Algorithm


1. OMP

At each step, search for the maximum value of the inner product <a_j, r> between a column of A and the residual, which is the column giving the minimum residual error.

Setting the derivative of the squared error to zero,

d/dx ||A(:,ss)*x - b||^2 = 2*A(:,ss)'*(A(:,ss)*x - b) = 0,

that is, the columns of A indexed by ss are orthogonal to the residual.

Algorithm steps:

Input: a sparse vector x (m×1), a column-normalized matrix A (n×m), the ideal output b, the sparsity K, and the error tolerance.

Initialize the residual r1 to b.

Find the index posZ at which |A'*r1| is maximal (a column index, corresponding to an element of x).

Update the index vector: ss = sort([ss, posZ(1)]).

Find the least-squares solution of min ||A(:,ss)*x - b||: x0 = pinv(A(:,ss))*b.

Compute the residual r1 = b - A(:,ss)*x0 = b - A(:,ss)*pinv(A(:,ss))*b.

(This can be understood as b minus the orthogonal projection of b onto the columns A(:,ss); A(:,ss)*pinv(A(:,ss)) is an orthogonal projection operator, so in the next index search the columns already in ss will not be selected again.)

Repeat (while loop) until the residual satisfies the stopping condition (multiple iterations may be needed).

The output is the vector x0 and the error r2 (computed with the 2-norm via norm). A minimal sketch of these steps follows.
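Below is a minimal MATLAB sketch of the steps above. The problem sizes (n = 20, m = 30), the sparsity of 3, and the threshold thrOMP = 1e-4 are illustrative assumptions chosen to match the experiments later in the post, not values fixed by the steps themselves:

n = 20; m = 30;                        % illustrative sizes (assumed)
A = randn(n, m);
W = sqrt(diag(A'*A));
for k = 1:1:m
    A(:,k) = A(:,k) / W(k);            % column normalization
end
x = zeros(m, 1);
pos = randperm(m);
x(pos(1:3)) = sign(randn(3,1)) .* (1 + rand(3,1));  % sparsity 3 (assumed)
b = A * x;
thrOMP = 1e-4;                         % error tolerance (assumed)
r1 = b;                                % initialize the residual to b
ss = [];
while r1'*r1 > thrOMP
    z = abs(A' * r1);                  % correlations of all columns with the residual
    posZ = find(z == max(z), 1);       % index of the largest |a_j'*r1|
    ss = sort([ss, posZ(1)]);          % update the index vector
    x0 = pinv(A(:,ss)) * b;            % least-squares solution on the current support
    r1 = b - A(:,ss) * x0;             % b minus its orthogonal projection onto A(:,ss)
end
r2 = norm(r1);                         % final error in the 2-norm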

2. LS-OMP

The difference from OMP lies in the search for the index posZ:

For each candidate column j of A, append it to the accumulated index list and compute the residual of the least-squares fit; the chosen index posZ is the column giving the lowest residual. The remaining steps are the same (see the code below).

 

n = 20; m = 30;
A = randn(n, m);
W = sqrt(diag(A'*A));
for k = 1:1:m
    A(:,k) = A(:,k) / W(k);            % normalize the columns of A
end
for s = 1:1:10
    for ex = 1:1:50
        x = zeros(m, 1);
        pos = randperm(m);
        x(pos(1:s)) = sign(randn(s,1)) .* (1 + rand(s,1));  % the sparsity of x is s, increased gradually up to sMax
        b = A * x;
        thrLSMP = 1e-4;
        % LS-OMP
        r = b;
        ss = [];
        while r'*r > thrLSMP
            Z = zeros(m, 1);
            for j = 1:1:m
                rTemp = b - A(:,[ss,j]) * pinv(A(:,[ss,j])) * b;
                Z(j) = rTemp' * rTemp;
            end
            posZ = find(Z == min(Z), 1);
            ss = [ss, posZ(1)];
            r = b - A(:,ss) * pinv(A(:,ss)) * b;
        end
        x0 = zeros(m, 1);
        x0(ss) = pinv(A(:,ss)) * b;
        R0(s,ex) = mean((x0 - x).^2) / mean(x.^2);
        % OMP
        r = b;
        ss = [];
        while r'*r > thrLSMP
            z = abs(A' * r);
            posZ = find(z == max(z), 1);
            ss = [ss, posZ];
            r = b - A(:,ss) * pinv(A(:,ss)) * b;
        end
        x0 = zeros(m, 1);
        x0(ss) = pinv(A(:,ss)) * b;
        R1(s,ex) = mean((x0 - x).^2) / mean(x.^2);
    end
end
figure(1);
plot(1:10, mean(R0(:,:), 2), 'r'); hold on;
plot(1:10, mean(R1(:,:), 2), 'k');
legend('LS-OMP', 'OMP');
xlabel('s'); ylabel('err');


3. MP

Unlike OMP, MP does not retain and update accumulated index values: the residual is updated using only the currently found index posZ (not the accumulated ss), and the output vector is updated as xMP(posZ) = xMP(posZ) + A(:,posZ)'*r1.

Note: in orthogonal matching pursuit (OMP), every column indexed by ss (that is, every column already selected as posZ) is orthogonal to the residual, so it will not be selected again, which guarantees fast convergence. In MP, only the column corresponding to the most recent index is orthogonal to the residual (see the sketch below).
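A minimal MATLAB sketch of MP along the lines described above; the setup (sizes, sparsity 3, threshold thrMP, and the iteration cap) is assumed for illustration:

n = 20; m = 30;                        % illustrative sizes (assumed)
A = randn(n, m);
W = sqrt(diag(A'*A));
for k = 1:1:m
    A(:,k) = A(:,k) / W(k);            % MP assumes unit-norm columns
end
x = zeros(m, 1);
pos = randperm(m);
x(pos(1:3)) = sign(randn(3,1)) .* (1 + rand(3,1));  % sparsity 3 (assumed)
b = A * x;
thrMP = 1e-4;                          % assumed threshold
xMP = zeros(m, 1);
r1 = b;
it = 0;
while r1'*r1 > thrMP && it < 1000      % iteration cap (assumed), since MP converges slowly
    z = abs(A' * r1);
    posZ = find(z == max(z), 1);       % only the current index; no accumulated ss
    coef = A(:,posZ)' * r1;
    xMP(posZ) = xMP(posZ) + coef;      % update the output vector
    r1 = r1 - coef * A(:,posZ);        % residual now orthogonal to this column only
    it = it + 1;
end

Because each selected column is removed from the residual only once, previously chosen columns can be selected again later, which is why MP converges more slowly than OMP.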

4. Threshold Algorithm

A simplified version of OMP: the index values posZ are taken once, based only on the first projection, i.e.

z = A' * b;

[zA, posZ] = sort(abs(z), 'descend');
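A minimal MATLAB sketch of the whole threshold algorithm; growing the support by one ranked column at a time until the residual is small enough is one natural reading of "simplified OMP", and the setup values are again illustrative assumptions:

n = 20; m = 30;                        % illustrative sizes (assumed)
A = randn(n, m);
W = sqrt(diag(A'*A));
for k = 1:1:m
    A(:,k) = A(:,k) / W(k);            % column normalization
end
x = zeros(m, 1);
pos = randperm(m);
x(pos(1:3)) = sign(randn(3,1)) .* (1 + rand(3,1));  % sparsity 3 (assumed)
b = A * x;
thr = 1e-4;                            % assumed threshold
z = A' * b;                            % first projection only
[zA, posZ] = sort(abs(z), 'descend');  % rank all columns once
k = 0; r = b; ss = [];
while r'*r > thr && k < m
    k = k + 1;
    ss = posZ(1:k);                    % take the k largest first projections
    r = b - A(:,ss) * pinv(A(:,ss)) * b;
end
x0 = zeros(m, 1);
x0(ss) = pinv(A(:,ss)) * b;            % least-squares fit on the chosen support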

5. IRLS-noNoise

This algorithm is close to the IRLS used for the P1 problem, but here the focus is on relaxing the l0 norm, i.e. using the lp norm with 0 < p <= 1. In addition, unlike the IRLS that handles ||b - A*x|| <= epsilon (a least-squares fit, i.e. the sum of squared residuals), IRLS-noNoise addresses b = A*x exactly (via a Lagrange multiplier, i.e. setting the derivative to zero directly): the noiseless case.

(Can this algorithm be connected to FOCUSS? What is the difference between the two?)

Similarly, introduce the weight matrix X = diag(abs(x).^p). Then x'*(X^2)^(-1)*x = sum(abs(x).^(2-2p)), so the P0 problem is converted to

min x'*(X^2)^(-1)*x  subject to  A*x = b,

where p = 1 recovers the P0 problem (the l0 norm) and p = 0.5 recovers the P1 problem (the l1 norm).

Setting the derivative of the Lagrangian x'*(X^2)^(-1)*x + lambda'*(b - A*x) to zero yields

x = X^2*A'*(A*X^2*A')^(-1)*b

Note: X has zero entries, so it is not invertible, and A*X^2*A' may be singular; in the subsequent calculations the inverse is therefore not used but the pseudo-inverse. Replacing the inverse with the pseudo-inverse:

x = X^2*A'*pinv(A*X^2*A')*b

And then iterate a certain number of times to get a better solution.

n = 20; m = 30;
A = randn(n, m);
W = sqrt(diag(A'*A));
for k = 1:1:m
    A(:,k) = A(:,k) / W(k);            % normalize the columns of A
end
for s = 1:1:10
    x = zeros(m, 1);
    pos = randperm(m);
    x(pos(1:s)) = sign(randn(s,1)) .* (1 + rand(s,1));  % the sparsity of x is s, increased gradually up to sMax
    b = A * x;
    % IRLS-noNoise
    p = 1;      % l0 norm
    % p = 0.5;  % l1 norm
    x0 = ones(m, 1);                   % start from a non-sparse iterate so the weights are well defined
    for k = 1:1:20
        XX  = diag(abs(x0).^p);        % weights from the current iterate
        XX2 = XX * XX;
        x0  = XX2 * A' * pinv(A * XX2 * A') * b;
        RR3(k) = mean((x0 - x).^2) / mean(x.^2);
    end
    R3(s) = RR3(end);                  % relative error after the final iteration
end
plot(R3); title('err of IRLS-noNoise');


Note: IRLS runs fast and achieves a very small error; it is a good solver for the P0 problem.
