P1 Problem Solving Algorithms



1. BP (Basis Pursuit)

Uses MATLAB's linear-programming solver, linprog.

Main idea: convert the P1 problem into a linear program.

That is, go from (3.1) to (3.2).

x = [u; v], where u, v >= 0 and alpha = u - v, so that ||alpha||_1 = sum(u) + sum(v)

A = [Phi, -Phi]

b = s

Then A*x = Phi*(u - v) = Phi*alpha = s, so the equality constraint of P1 is preserved.

Solution: x0 = linprog(c, [], [], A, b, zeros(2*p,1)); with c = ones(2*p,1)

The optimal solution of P1 is then x0(1:p) - x0(p+1:2*p).

Note: removing the absolute value this way is justified because, at the optimum, u and v are never both positive in the same coordinate (otherwise subtracting the smaller from both would keep alpha = u - v while lowering the objective), so u + v = |alpha| entrywise.
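
A minimal runnable sketch of this section; the test setup (dimensions, sparsity, and the names Phi, alpha_hat) is illustrative and not from the original:

n = 20; p = 30;
Phi = randn(n, p);                           % random dictionary
alpha = zeros(p, 1);
pos = randperm(p);
alpha(pos(1:3)) = randn(3, 1);               % 3-sparse ground truth
s = Phi * alpha;                             % noiseless measurements
c = ones(2*p, 1);                            % objective: sum(u) + sum(v) = ||alpha||_1
A = [Phi, -Phi];                             % equality constraint A*[u; v] = s
x0 = linprog(c, [], [], A, s, zeros(2*p, 1));
alpha_hat = x0(1:p) - x0(p+1:2*p);           % recovered coefficients
disp(norm(alpha_hat - alpha) / norm(alpha)); % should be near zero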

2. BPDN (Basis Pursuit De-Noising)

Uses MATLAB's quadprog.

x0 = quadprog(B, c, [], [], [], [], zeros(2*m,1));

x0 = x0(1:m) - x0(m+1:2*m);

As with BP, the substitution x = u - v (u, v >= 0) converts the P1 problem into a quadratic program in [u; v].

The difference is that BPDN allows noise in the measurements; compare the IRLS-nonoise and IRLS variants below. The form of B and c used in the code is reconstructed in the sketch that follows.
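
A reconstruction of why B and c take the form used in the code (derived from the code itself, with bb = A'*b):

\[
\tfrac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_1
=\tfrac{1}{2}\begin{bmatrix}u\\ v\end{bmatrix}^{\top}
\begin{bmatrix}A^{\top}A & -A^{\top}A\\ -A^{\top}A & A^{\top}A\end{bmatrix}
\begin{bmatrix}u\\ v\end{bmatrix}
+\left(\lambda\mathbf{1}+\begin{bmatrix}-A^{\top}b\\ A^{\top}b\end{bmatrix}\right)^{\top}
\begin{bmatrix}u\\ v\end{bmatrix}
+\tfrac{1}{2}\|b\|_2^2,
\]

so quadprog(B, c, [], [], [], [], zeros(2*m,1)) with B = [A'*A, -A'*A; -A'*A, A'*A] and c = lambda*ones(2*m,1) + [-bb; bb] minimizes exactly the BPDN objective, up to the constant ||b||^2/2.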

n=20; m=30;
A=randn(n,m);
W=sqrt(diag(A'*A));
for k=1:1:m
    A(:,k)=A(:,k)/W(k);   % normalize the columns of A
end
s=5;
x=zeros(m,1);
pos=randperm(m);
x(pos(1:s))=sign(randn(s,1)).*(1+rand(s,1));   % the sparsity of x is s
b=A*x;
lambda=0.0001*2.^(0:0.75:15);
for i=1:1:21
    %{
    % BP
    bb=A'*b;
    c=lambda(i)*ones(2*m,1)+[-bb;bb];
    B=[A'*A,-A'*A;-A'*A,A'*A];
    x0=quadprog(B,c,[],[],[],[],zeros(2*m,1));
    x0=x0(1:m)-x0(m+1:2*m);
    %}
    % BPDN
    bb=A'*b;
    c=lambda(i)*ones(2*m,1)+[-bb;bb];   % note the value of lambda (e.g. 0.05)
    B=[A'*A,-A'*A;-A'*A,A'*A];
    x0=quadprog(B,c,[],[],[],[],zeros(2*m,1));
    x0=x0(1:m)-x0(m+1:2*m);
end
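
To compare the different lambda values, one can record the relative error inside the loop, mirroring the IRLS experiment below (the array name Err here is an assumed addition):

% assumed addition inside the loop above, after computing x0:
Err(i) = (x0 - x)' * (x0 - x) / (x' * x);   % relative squared error vs. ground truth
% after the loop, the best lambda is lambda(find(Err == min(Err), 1))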

  

3. IRLS (Iteratively Reweighted Least Squares)

Relaxing L0 to L1 gives P1, i.e. BPDN (basis pursuit denoising); writing P1 in standard unconstrained form with a Lagrange multiplier gives problem Q1.

In its general form, Q1 is known in regression as the LASSO (least absolute shrinkage and selection operator).

Main idea: to make Q1 easy to differentiate, write ||x||_1 = x'*inv(X)*x with X = diag(|x|); holding X fixed turns Q1 into a weighted least-squares problem M.

Input: A, b, and an initial x.

The least-squares solution of problem M:

xout = inv(2*lambda(ll)*Xx + AA) * Ab;   % AA = A'*A, Ab = A'*b

Update of the diagonal weight matrix X, where Xx is the inverse of X:

Xx = diag(1./(abs(xout) + 1e-10));   % the 1e-10 guards against division by zero

Stop iterating when the change ||x_k - x_{k-1}|| drops below a preset threshold (the experiment below simply runs a fixed 15 iterations).

Output: an approximate solution of problem Q1.
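
A sketch of that stopping rule (the tolerance tol is an assumed value; m, lambda(ll), AA, and Ab are as in the experiment below):

tol = 1e-6;                                  % assumed threshold
xout = zeros(m, 1);
Xx = eye(m);
xprev = inf(m, 1);
while norm(xout - xprev) >= tol
    xprev = xout;
    xout = inv(2*lambda(ll)*Xx + AA) * Ab;   % weighted least-squares step
    Xx = diag(1./(abs(xout) + 1e-10));       % reweight
end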

n=100; m=200; s=4; sigma=0.1;
A=randn(n,m);
W=sqrt(diag(A'*A));
for k=1:1:m
    A(:,k)=A(:,k)/W(k);
end
x0=zeros(m,1);
pos=randperm(m);
x0(pos(1:s))=sign(randn(s,1)).*(1+rand(s,1));
b=A*x0+randn(n,1)*sigma;
% the IRLS algorithm for varying lambda
Xsol=zeros(m,101);
Err=zeros(101,1);
Res=zeros(101,1);
lambda=0.0001*2.^(0:0.15:15);
AA=A'*A;
Ab=A'*b;
for ll=1:1:101
    xout=zeros(m,1);
    Xx=eye(m);   % identity weight matrix initially
    for k=1:1:15
        xout=inv(2*lambda(ll)*Xx+AA)*Ab;
        Xx=diag(1./(abs(xout)+1e-10));
        % disp(norm(xout-x0)/norm(x0));
    end
    Xsol(:,ll)=xout;
    Err(ll)=(xout-x0)'*(xout-x0)/(x0'*x0);        % error
    Res(ll)=(A*xout-b)'*(A*xout-b)/sigma^2/n;     % residual
end

  

How should the Lagrange multiplier lambda be chosen?

Rule of thumb: choose lambda roughly as the ratio of the noise standard deviation to the standard deviation of the expected nonzero values.

Note: the choice is near-optimal when the residual power and the noise power are equal.

To find the optimum: pick an approximate range from the rule of thumb, e.g. lambda = 0.0001*2.^(0:0.15:15);

then the lambda with the minimum error is the best:

posL = find(Err == min(Err), 1);

Note: BPDN (via quadprog) runs slower than IRLS.
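
A short usage sketch of that selection, continuing the IRLS experiment above (best_lambda and xbest are assumed names):

posL = find(Err == min(Err), 1);    % index of the minimum-error lambda
best_lambda = lambda(posL);
xbest = Xsol(:, posL);
% sanity check of the note above: Res was normalized by sigma^2*n,
% so near the optimum the normalized residual should be close to 1
disp([best_lambda, Err(posL), Res(posL)]);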

4. LARS (Least Angle Regression Stagewise)

This one takes a bit more explaining.

Forward selection is close in spirit to OMP.

Forward stagewise feels smoother because its step size is smaller.

LAR takes a step size between the two; this is least angle regression.

Combining the two ideas gives LARS.

Main Ideas:

beta is the sparse (regression) coefficient column vector, and mu = X*beta is the estimated vector.

First take mu = 0, i.e. start from the zero estimate.

c = X'*(y - mu) is proportional to the correlations between the columns of X and the current residual vector.

Move in the direction j with the largest |c_j|.

Algorithm Steps:

    1. Find the index j of maximum correlation and keep updating the active set: A = [A, j];

       For ease of understanding, one can introduce X_A, the submatrix of the columns of X indexed by A.

    2. Compute the step size gamma, given by the two formulas (15) and (16) (a reconstruction follows this list):

    3. Update mu: mu = mu + gamma*u.

    4. Solve for beta.
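
A reconstruction of the step-size rule from the code below (notation: C = max_j |c_j|, A_A the normalization constant, a = X'*u; the lost formulas (15)-(16) presumably correspond to equation (2.13) of Efron et al.):

\[
\hat{\gamma}=\min_{j\in\mathcal{A}^{c}}{}^{+}\left\{\frac{C-\hat{c}_{j}}{A_{\mathcal{A}}-a_{j}},\;\frac{C+\hat{c}_{j}}{A_{\mathcal{A}}+a_{j}}\right\},
\]

where min+ keeps only the positive candidates; on the last step, gamma = C / A_A.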

n=100; m=200; s=4; sigma=0.1;
A=randn(n,m);
W=sqrt(diag(A'*A));
for k=1:1:m
    A(:,k)=A(:,k)/W(k);
end
x0=zeros(m,1);
pos=randperm(m);
x0(pos(1:s))=sign(randn(s,1)).*(1+rand(s,1));
b=A*x0+randn(n,1)*sigma;
Err=zeros(n,1);
Res=zeros(n,1);
XsolLARS=lars(A,b);   % compute the path of sparse vectors
for k=1:1:n
    xout=XsolLARS(k,:)';
    ErrLARS(k)=(xout-x0)'*(xout-x0)/(x0'*x0);
end

function beta = lars(X, y)   % LARS path; each row of beta is a sparse vector
[n,m]=size(X);
nvars=min(n-1,m);
mu=zeros(n,1);
beta=zeros(2*nvars,m);
I=1:m;
A=[];
k=0;
vars=0;
Gram=X'*X;
while vars<nvars
    k=k+1;
    c=X'*(y-mu);
    [C,j]=max(abs(c(I)));   % c(I) excludes previously selected indices
    j=I(j);
    A=[A j];
    vars=vars+1;
    I(I==j)=[];             % remove j from the inactive set (not set to zero)
    s=sign(c(A));
    G=s'*inv(Gram(A,A))*s;
    AA=G^(-1/2);
    wA=AA*inv(Gram(A,A))*s;
    u=X(:,A)*wA;
    if vars==nvars
        gamma=C/AA;
    else
        a=X'*u;
        % drop the entries of c and a belonging to A, then take the minimum
        % over the positive candidates to get gamma
        temp=[(C-c(I))./(AA-a(I)); (C+c(I))./(AA+a(I))];
        gamma=min([temp(temp>0); C/AA]);
    end
    mu=mu+gamma*u;
    beta(k+1,A)=beta(k,A)+gamma*wA';
end
if size(beta,1) > k+1
    beta(k+2:end,:)=[];     % keep only the computed steps
end
end

  

Extension: hard thresholding and soft thresholding

If A is a unitary matrix, then A*A' = I, so the P0 problem can be rewritten coordinate-wise: with beta = A'*b, the data term ||A*x - b||^2 equals ||x - beta||^2, and the problem separates over the entries of x.

Let x = T(beta).

The solution with the smallest error, i.e. the closest sparse vector, simply keeps the entries of beta with the largest absolute values (sort by descending |beta_j| and zero out the rest).

Here T(.) is the hard-thresholding operator. How should the threshold be chosen?

Following the LARS/LASSO viewpoint, lambda supplies the cutoff: for unitary A, the Q1 solution is soft thresholding of beta at level lambda, i.e. lambda is a soft threshold.
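
A minimal sketch of both operators for the unitary case (the random orthogonal A and the threshold value t are assumed demo choices):

% hard vs. soft thresholding for a unitary (orthogonal) A
m=64; s=4; sigma=0.05;
[A,~]=qr(randn(m));                        % random orthogonal matrix: A*A' = I
x=zeros(m,1); pos=randperm(m);
x(pos(1:s))=sign(randn(s,1)).*(1+rand(s,1));
b=A*x+sigma*randn(m,1);
beta=A'*b;                                 % back-projection; problem is now separable
t=3*sigma;                                 % assumed threshold
x_hard=beta.*(abs(beta)>t);                % hard threshold: keep the large entries
x_soft=sign(beta).*max(abs(beta)-t,0);     % soft threshold: shrink by t (lambda)
disp([norm(x_hard-x), norm(x_soft-x)]/norm(x));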

