Material Detail

Limited-memory quasi-Newton and Hessian-free Newton methods for non-smooth optimization

This video was recorded at NIPS Workshops, Whistler 2010. Limited-memory quasi-Newton and Hessian-free Newton methods are two workhorses of unconstrained optimization of high-dimensional smooth objectives. However, in many cases we would like to optimize a high-dimensional unconstrained objective function that is non-smooth due to the presence of a 'simple' non-smooth regularization term. Motivated by problems arising in estimating sparse graphical models, in this talk we focus on strategies for extending limited-memory quasi-Newton and Hessian-free Newton methods for unconstrained optimization to this scenario. We first consider two-metric (sub-)gradient projection methods for problems where the regularizer is separable, and then consider proximal Newton-like methods for group-separable and non-separable regularizers. We will discuss several applications where sparsity-encouraging regularizers are used to estimate graphical model parameters and/or structure, including the estimation of sparse, blockwise-sparse, and structured-sparse models.
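For readers wondering what makes a non-smooth regularizer 'simple', here is a minimal illustrative sketch in Python/NumPy. It is not the speaker's limited-memory quasi-Newton or Hessian-free variants, just a plain proximal-gradient (ISTA) loop for an L1-regularized least-squares problem, showing the property that proximal Newton-like methods also exploit: the proximal operator of a separable regularizer such as the L1 norm has a cheap closed form (soft-thresholding). All names and parameters below (soft_threshold, proximal_gradient, lam, step) are illustrative assumptions, not from the talk.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||x||_1; closed form because the L1 norm is separable.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def proximal_gradient(grad_f, x0, lam, step, n_iters=500):
        # Minimize f(x) + lam * ||x||_1 by alternating a gradient step on the
        # smooth part f with the proximal map of the non-smooth regularizer.
        x = x0.copy()
        for _ in range(n_iters):
            x = soft_threshold(x - step * grad_f(x), step * lam)
        return x

    # Toy problem: sparse least squares, f(x) = 0.5 * ||A @ x - b||^2.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = rng.standard_normal(20) * (rng.random(20) < 0.2)  # sparse ground truth
    b = A @ x_true
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad f
    x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b), np.zeros(20), lam=0.1, step=step)

The methods in the talk replace the fixed scalar step with curvature information from limited-memory quasi-Newton or Hessian-free approximations, which is what makes the proximal (or projection) step non-trivial for group-separable and non-separable regularizers.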
