Sparse representation software and toolkit


Fast L-1 minimization algorithms: homotopy and augmented Lagrangian method
-- Implementation from fixed-point MPUs to many-core CPUs/GPUs

Allen Y. Yang, Arvind Ganesh, Zihan Zhou,
Andrew Wagner, Victor Shia, Shankar Sastry, and Yi Ma

Copyright Notice: it is important that you read and understand the copyright of the following software packages as specified in the individual items. The copyright varies with each
package due to its author(s). The packages should not be used for any commercial purposes without direct consent of their author(s).

This project is partially supported by the NSF TRUST Center at UC Berkeley, ARO MURI W911NF-06-1-0076, and ARL MAST-CTA W911NF-08-2-0004.

Publications
  1. Allen Yang, Arvind Ganesh, Zihan Zhou, Shankar Sastry, and Yi Ma.
    A review of fast l1-minimization algorithms for robust face recognition. (preprint)
  2. Allen Yang, Arvind Ganesh, Shankar Sastry, and Yi Ma.
    Fast l1-minimization algorithms and an application in robust face recognition: a review.
    ICIP 2010.
  3. Victor Shia, Allen Yang, Shankar Sastry, Andrew Wagner, and Yi Ma.
    Fast l1-minimization and parallelization for face recognition. Asilomar 2011.
MATLAB benchmark scripts

  • L-1 benchmark package: http://www.eecs.berkeley.edu/~yang/software/l1benchmark/l1benchmark.zip
The package contains a consolidated implementation of nine l1-minimization algorithms in MATLAB. Each function uses a consistent set of parameters (e.g., stopping criterion and tolerance) to interface with our benchmark scripts.
  1. Orthogonal Matching Pursuit: solveomp.m
  2. Primal-Dual Interior-Point method: solvebp.m
  3. Gradient Projection: solvel1ls.m
  4. Homotopy: solvehomotopy.m
  5. Polytope Faces Pursuit: solvepfp.m
  6. Iterative Thresholding: solvesparsa.m
  7. Proximal Gradient: solvefista.m
  8. Primal Augmented Lagrange Multiplier: solvepalm.m
  9. Dual Augmented Lagrange Multiplier: solvedalm.m; solvedalm_fast.m
The package also contains a script to generate the synthetic data shown in the paper [1].
Note:
1. To run the alternating direction method (yall1), one needs to separately download the package from its authors (following the link at the end of the page).
2. Please properly acknowledge the respective authors in your publications when you use this package.
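All nine solvers target variants of the same core problem, min ||x||_1 subject to Ax = b (or its unconstrained Lagrangian form). As a rough illustration of the proximal-gradient family (the approach behind solvefista), here is a minimal FISTA sketch in Python/NumPy; it is not code from the package, and all variable names are ours:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam=0.01, max_iter=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov acceleration."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(n)
    y, t = x.copy(), 1.0
    for _ in range(max_iter):
        # Gradient step on the smooth term, then shrinkage.
        x_next = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Recover a 3-sparse signal from 100 random projections (n = 300).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300)) / np.sqrt(100)
x_true = np.zeros(300)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = fista(A, b)
```

With a small regularization weight and noiseless measurements, the lasso solution essentially coincides with the true sparse signal, which is why the same stopping criterion and tolerance can be shared across solvers in the benchmark.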


Single-core L-1 minimization library in C

  • Homotopy and ALM algorithms implemented in C with MATLAB wrapper:
    http://www.eecs.berkeley.edu/~yang/software/l1benchmark/L1-Homotopy-ALM.zip
Fixed-Point L-1 Minimization for Mobile Platforms
  • Fixed-Point Homotopy algorithm implemented in Java:
    http://www.eecs.berkeley.edu/~yang/software/l1benchmark/fixed_point_homotopy_java.zip
Kernel L-1 minimization library in C/CUDA

  • Coming soon...
Benchmark Results
Simulations
  • Noiseless delta-rho plot at 95% confidence


The delta-rho plot measures the percentage of successes in recovering a sparse signal at pairs of (delta, rho) combinations, where delta = d/n is the sampling rate and rho = k/d
is the sparsity rate. Then a fixed success rate of 95% over all delta's can be interpolated as a curve in the plot, as shown on the left. In general, the higher the curve, the better an algorithm recovers dense signals
in the L-1 problem.
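The measurement behind such a plot can be reproduced as follows: for each (delta, rho) pair, draw a random Gaussian matrix and a random sparse signal, attempt recovery, and count the fraction of trials with negligible error. The Python/NumPy sketch below is only illustrative; for brevity it uses orthogonal matching pursuit as the recovery routine, whereas the benchmark runs the nine MATLAB solvers:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily select k columns, refit by least squares."""
    residual, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def success_rate(n, delta, rho, trials=20, seed=0):
    """Fraction of random trials in which a k-sparse signal is recovered
    from d = delta*n measurements, with k = rho*d nonzeros."""
    rng = np.random.default_rng(seed)
    d = int(delta * n)
    k = max(1, int(rho * d))
    wins = 0
    for _ in range(trials):
        A = rng.standard_normal((d, n)) / np.sqrt(d)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = omp(A, A @ x, k)
        wins += np.linalg.norm(x_hat - x) < 1e-6 * max(1.0, np.linalg.norm(x))
    return wins / trials

# High sampling rate, low sparsity: recovery should almost always succeed.
rate = success_rate(n=200, delta=0.5, rho=0.05)
```

Sweeping delta and rho over a grid of such calls, and interpolating the 95% success level, yields a curve like the one in the figure.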

Observations:

  1. Without concerns about speed and data noise, the success rate of the Interior-point method pdipa is the highest of all the algorithms in the figure, especially when the signal becomes dense.
  2. The success rates of l1ls and homotopy are similar, and they are very close to those of pdipa.
  3. The success rates of fista and dalm are comparable over all sampling rates. The performance also shows significant improvement over the IST algorithm, namely, sparsa.
  • Fixed low sparsity simulation (only speed is shown here)


The figure on the left shows the average run time over various projection dimensions D, where the ambient dimension is n = 2000. A low sparsity is fixed at k = 200.

Observations:

  1. The computational complexity of pdipa grows much faster than that of the other algorithms. Moreover, in contrast to its noise-free performance, its estimation error also grows exponentially, in which case the algorithm fails
    to converge to an estimate that is close to the ground truth (please refer to the paper).
  2. l1ls and homotopy take much longer to converge than sparsa, fista, and dalm.
  3. The average run time of dalm is the smallest over all projection dimensions.
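The timing protocol behind these curves is straightforward to reproduce: for each projection dimension d, generate random sparse-recovery instances, run each solver, and average the wall-clock time. An illustrative Python sketch (with a least-squares baseline standing in for the l1 solvers, and smaller dimensions than the paper's n = 2000; names are ours):

```python
import time
import numpy as np

def average_runtime(solver, n, d, k, trials=5, seed=0):
    """Mean wall-clock time of `solver` on random k-sparse instances
    with ambient dimension n and projection dimension d."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        A = rng.standard_normal((d, n)) / np.sqrt(d)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        b = A @ x
        start = time.perf_counter()
        solver(A, b)
        total += time.perf_counter() - start
    return total / trials

# Minimum-norm least squares as a stand-in; swap in real l1 solvers here.
least_squares = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
times = [average_runtime(least_squares, n=500, d=d, k=50) for d in (100, 200, 300)]
```

Plotting `times` against d for each solver reproduces the shape of the figure; only the relative growth of the curves matters, not the absolute timings.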

  • Fixed high sampling rate simulation (only speed is shown here)


The figure on the left shows the average run time over various sparsity ratios rho, where the ambient dimension is again n = 2000. A high sampling rate is fixed at d = 1500.

Observations:

  1. Again, pdipa significantly underperforms the other five algorithms in terms of both accuracy and speed.
  2. The average run time of homotopy grows almost linearly with the sparsity ratio, while the other algorithms are relatively unaffected. Thus, homotopy is more suitable for applications where the unknown signal is expected
    to have a very small sparsity ratio.
  3. dalm is again the fastest algorithm, ahead of sparsa and fista.
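solvepalm and solvedalm apply augmented Lagrangian methods to the equality-constrained problem min ||x||_1 subject to Ax = b. As a rough, self-contained illustration of this family (an ADMM variant in Python/NumPy, not the package's exact algorithm; the penalty parameter `mu` and all names are ours):

```python
import numpy as np

def bp_admm(A, b, mu=10.0, max_iter=2000):
    """ADMM sketch for basis pursuit: min ||x||_1 subject to A x = b.
    Splits x = z: the x-step projects onto the affine set {x : A x = b},
    the z-step soft-thresholds, and u accumulates the scaled dual variable."""
    d, n = A.shape
    AAt = A @ A.T                      # formed once, reused every iteration
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        v = z - u
        x = v - A.T @ np.linalg.solve(AAt, A @ v - b)            # projection
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / mu, 0.0)   # shrinkage
        u += x - z
    return z

# Recover a 5-sparse signal in R^200 from 50 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = [2.0, -2.0, 1.5, -1.5, 2.5]
b = A @ x_true
x_hat = bp_admm(A, b)
```

The per-iteration cost is dominated by one linear solve against AA^T, which is why augmented Lagrangian methods stay fast even as the signal becomes less sparse, consistent with the observations above.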

Robust Face Recognition


    Under construction...

    The CMU Multi-PIE database can be purchased from here: http://cmu.wellspringsoftware.net/invention/detail/2309/

    • Solving the cross-and-bouquet model in robust face recognition

    This experiment selects 249 subjects from Multi-PIE and chooses 7 extreme illumination conditions as the training images. The testing images are corrupted at random pixel coordinates from 0% to 90%. We
    measure the average classification rate and the speed under different corruption percentages.

    Observations:

    1. In terms of accuracy, homotopy achieves the best overall performance. The performance of pdipa is very close to homotopy, achieving the second-best overall accuracy. On the other hand, fista obtains the lowest recognition rates.
    2. In terms of speed, homotopy is also one of the fastest algorithms, especially when the pixel corruption percentage is small.
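The classification rule being benchmarked is sparse-representation-based classification (SRC): express the test image as a sparse combination of all training images, then assign the class whose own training columns reconstruct it with the smallest residual. A hedged Python/NumPy sketch of the decision rule on synthetic data (OMP stands in for the l1 solvers; all names and dimensions are ours):

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse coding; a stand-in for the l1 solvers benchmarked above."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def src_classify(A, labels, y, k=3):
    """SRC decision rule: sparse-code y over all training columns, then pick
    the class whose columns yield the smallest reconstruction residual."""
    x = omp(A, y, k)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Two synthetic "subjects", each spanning its own 3-dimensional subspace.
rng = np.random.default_rng(1)
basis0, basis1 = rng.standard_normal((30, 3)), rng.standard_normal((30, 3))
A = np.concatenate([basis0 @ rng.standard_normal((3, 5)),
                    basis1 @ rng.standard_normal((3, 5))], axis=1)
A /= np.linalg.norm(A, axis=0)       # unit-norm training columns
labels = np.array([0] * 5 + [1] * 5)
y = 0.6 * A[:, 0] + 0.8 * A[:, 2]    # a new view of subject 0
predicted = src_classify(A, labels, y)
```

In the actual experiment the dictionary columns are the 7-illumination training images of the 249 subjects, and robustness to pixel corruption comes from augmenting the dictionary as in the cross-and-bouquet model.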
Other public L-1 minimization libraries

  • SparseLab: http://sparselab.stanford.edu/
    • Orthogonal matching pursuit (OMP): solveomp.m
    • Primal-dual basis pursuit (BP): solvebp.m
    • Lasso: solvelasso.m
    • Polytope faces pursuit (PFP): solvepfp.m
  • L1magic: http://www.acm.caltech.edu/l1magic/
    • Primal-dual basis pursuit (BP): l1eq_pd.m
  • l1_ls: http://www.stanford.edu/~boyd/l1_ls/
    • Truncated Newton interior-point method: l1_ls.m
  • GPSR: http://www.lx.it.pt/~mtf/gpsr/
    • Gradient projection for sparse reconstruction: gpsr_bb.m
  • L1-Homotopy: http://users.ece.gatech.edu/~sasif/homotopy/
    • Homotopy method: bpdn_homotopy_function.m
  • SpaRSA: http://www.lx.it.pt/~mtf/sparsa/
    • Iterative shrinkage-thresholding algorithm: sparsa.m
  • FISTA: http://www.eecs.berkeley.edu/~yang/software/l1benchmark/
    • Fast IST algorithm: solvefista.m
  • FISTA for wavelet-based denoising: http://iew3.technion.ac.il/~becka/papers/wavelet_fista.zip
  • NESTA: http://www.acm.caltech.edu/~nesta/
    • Nesterov's algorithm: NESTA.m
  • YALL1: http://www.caam.rice.edu/~optimization/L1/yall1/
    • Alternating direction method: yall1.m
  • Bregman iterative regularization: http://www.caam.rice.edu/~optimization/L1/Bregman/
    • Fixed-point continuation and active set: fpc_as.m
