Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning

This video was recorded at Video Journal of Machine Learning Abstracts - Volume 2. We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients. This problem includes standard machine learning tasks such as kernel logistic regression and least-squares regression, and is commonly referred to as a stochastic approximation problem in the operations research community. We provide a non-asymptotic analysis of the convergence of two well-known algorithms: stochastic gradient descent (a.k.a. the Robbins-Monro algorithm) and a simple modification where iterates are averaged (a.k.a. Polyak-Ruppert averaging). Our analysis suggests that a learning rate proportional to the inverse of the number of...
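The two algorithms the abstract compares can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation): plain stochastic gradient descent on a least-squares objective accessed only through single-sample unbiased gradient estimates, alongside a running Polyak-Ruppert average of the iterates. The problem dimension, step-size constant, and noise level are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)  # unknown minimizer of the expected loss

def sample():
    # One (x, y) observation; the gradient of the per-sample squared
    # loss is an unbiased estimate of the gradient of E[(x^T w - y)^2]/2.
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    return x, y

n_steps = 20000
w = np.zeros(d)      # Robbins-Monro iterate
w_bar = np.zeros(d)  # Polyak-Ruppert average of the iterates
C = 0.2              # step-size constant (assumed, for illustration)

for n in range(1, n_steps + 1):
    x, y = sample()
    grad = (x @ w - y) * x        # unbiased stochastic gradient
    w -= C / np.sqrt(n) * grad    # decaying learning rate
    w_bar += (w - w_bar) / n      # running average: w_bar = mean(w_1..w_n)

err_last = np.linalg.norm(w - w_true)
err_avg = np.linalg.norm(w_bar - w_true)
print(err_last, err_avg)
```

On a run like this, the averaged iterate is typically at least as close to the minimizer as the last iterate, illustrating the variance-reduction effect of averaging that the abstract's non-asymptotic bounds quantify.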