A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

This video was recorded at the 26th Annual Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, 2012. We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
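
To make the gradient-memory idea in the abstract concrete, below is a minimal, illustrative sketch in Python: a table stores the most recently evaluated gradient of each component function, and every iteration steps along the average of that table. The function names, step size, and the L2-regularized logistic regression example are assumptions for illustration, not the authors' implementation or exact algorithm.

    import numpy as np

    def sag(grad_i, x0, n, step_size, num_passes=50):
        """Sketch of a stochastic gradient method with a memory of past gradients.

        grad_i(x, i) returns the gradient of the i-th component function at x.
        A table keeps the most recently evaluated gradient of each of the n
        component functions; each step moves along the average of the table.
        """
        x = x0.copy()
        grad_table = np.zeros((n, x0.size))   # memory of previous gradient values
        grad_sum = np.zeros(x0.size)          # running sum of the table rows
        for _ in range(num_passes * n):
            i = np.random.randint(n)          # sample one training example
            g = grad_i(x, i)
            grad_sum += g - grad_table[i]     # keep the sum consistent with the table
            grad_table[i] = g                 # overwrite the stored gradient for example i
            x -= step_size * grad_sum / n     # step along the average stored gradient
        return x

    # Illustrative use: L2-regularized logistic regression on synthetic data.
    rng = np.random.default_rng(0)
    n, d, lam = 200, 10, 0.1
    A = rng.standard_normal((n, d))
    b = np.sign(rng.standard_normal(n))

    def grad_i(x, i):
        # Gradient of the i-th logistic loss term plus its share of the regularizer.
        margin = b[i] * A[i].dot(x)
        return -b[i] * A[i] / (1.0 + np.exp(margin)) + lam * x

    x_hat = sag(grad_i, np.zeros(d), n, step_size=0.05)  # step size chosen for illustration only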
