• 1 First-Order Method. At each iteration, we only need to evaluate the function value and the gradient, so the algorithms can handle large-scale sparse data.
  • 2 Optimal Convergence Rate. The convergence rate O(1/k²) is optimal for smooth convex optimization with first-order black-box methods.
  • 3 Efficient Projection. The projection problem (proximal operator) can be solved efficiently.
  • 4 Pathwise Solutions. The SLEP package provides functions that efficiently compute the pathwise solutions corresponding to a series of regularization parameters by the “warm-start” technique.
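SLEP itself is implemented in MATLAB, so the following is only an illustrative Python sketch (not SLEP's actual code) of the ideas above: an accelerated proximal gradient (FISTA-style) solver for the ℓ1-regularized least-squares problem, which uses only function/gradient evaluations, achieves the O(1/k²) rate, relies on an efficiently computable proximal operator (soft-thresholding), and computes pathwise solutions by warm-starting each regularization parameter from the previous solution. The function names `fista_lasso` and `lasso_path` are hypothetical, chosen here for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, x0=None, max_iter=1000, tol=1e-8):
    """Accelerated proximal gradient (FISTA) for
       min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
       which attains the optimal O(1/k^2) rate for this problem class."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n) if x0 is None else x0.copy()
    y, t = x.copy(), 1.0
    for _ in range(max_iter):
        grad = A.T @ (A @ y - b)           # first-order information only
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x, t = x_new, t_new
    return x

def lasso_path(A, b, lams):
    # Pathwise solutions with warm starts: solve for decreasing lambda,
    # initializing each solve from the previous solution.
    x = np.zeros(A.shape[1])
    path = []
    for lam in sorted(lams, reverse=True):
        x = fista_lasso(A, b, lam, x0=x)
        path.append((lam, x.copy()))
    return path
```

Because neighboring regularization parameters have similar solutions, each warm-started solve typically needs far fewer iterations than solving from zero.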

Revision History

  • SLEP 4.1: Added Ordered Tree-Nonnegative Max-Heap
  • SLEP 4.0: Added sparse group Lasso, tree structured group Lasso, and overlapping group Lasso
  • SLEP 3.0: Added fused Lasso and sparse inverse covariance estimation
  • SLEP 2.0: Added trace norm regularized learning
  • SLEP 1.1: Added ℓ1/ℓ2-constrained sparse learning
  • SLEP 1.0: Initial release, August 2009

Citation

J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.

