Algorithmic Strategies for Non-convex Optimization in Sparse Learning

This video was recorded at the Workshop on Sparsity in Machine Learning and Statistics, Cumberland Lodge, 2009. We consider optimization formulations with non-convex regularization that are natural for learning sparse linear models. There are two common approaches to this problem:

  1. Heuristic methods such as gradient descent that only find a local minimum; a drawback of this approach is the lack of a theoretical guarantee that the local minimum is a good solution.
  2. Convex relaxation such as L1-regularization, which solves the problem under some conditions but often leads to sub-optimal sparsity in practice.

This talk tries to remedy this gap between theory and practice by presenting two algorithms for direct optimization of non-convex objectives in sparse learning, for which performance guarantees can be established. The first method is a multi-stage convex relaxation scheme that iteratively refines the L1-regularization. The second is a greedy procedure for solving non-convex sparse regularization. We show that, under appropriate conditions, both procedures lead to desirable local solutions of sparse non-convex formulations that are superior to the global solution of L1-regularization.
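
As a rough illustration of the first method, the sketch below implements a multi-stage convex relaxation loop in numpy: each stage solves a weighted L1-regularized least-squares problem (here via a simple proximal-gradient / ISTA solver), then uses the resulting coefficients to relax the penalty on coordinates that already look large. The capped-L1 style weight update, the ISTA solver, and all parameter values are illustrative assumptions, not the exact formulation used in the talk.

import numpy as np


def soft_threshold(z, t):
    # Proximal operator of the (possibly weighted) L1 norm, applied elementwise.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def weighted_lasso(X, y, lam_vec, n_iter=500):
    # Solve min_w 0.5*||y - X w||^2 + sum_j lam_vec[j] * |w_j| with plain ISTA.
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam_vec / L)
    return w


def multi_stage_relaxation(X, y, lam=0.1, theta=0.5, n_stages=5):
    # Stage 1 is ordinary L1-regularization; each later stage re-solves a weighted
    # L1 problem, dropping the penalty on coordinates whose current estimate is
    # already large (a capped-L1 style update; the threshold theta is illustrative).
    lam_vec = np.full(X.shape[1], lam)
    for _ in range(n_stages):
        w = weighted_lasso(X, y, lam_vec)
        lam_vec = lam * (np.abs(w) < theta)
    return w


if __name__ == "__main__":
    # Small synthetic example: 5-sparse ground truth in 50 dimensions.
    rng = np.random.default_rng(0)
    n, d, k = 100, 50, 5
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d)
    w_true[:k] = 3.0 * rng.standard_normal(k)
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    w_hat = multi_stage_relaxation(X, y)
    print("estimated support:", np.flatnonzero(np.abs(w_hat) > 1e-3))

Note that each outer stage is itself a convex (weighted L1) problem, which is what makes the scheme a sequence of convex relaxations rather than a direct attack on the non-convex objective. The greedy procedure mentioned as the second approach is not sketched here.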
