LaRank, SGD-QN - Fast Optimizers for Linear SVM
This video was recorded at the 25th International Conference on Machine Learning (ICML), Helsinki, 2008. Originally proposed for solving multiclass SVMs, the LaRank algorithm is a dual coordinate ascent algorithm relying on a randomized exploration inspired by the perceptron algorithm [Bordes05, Bordes07]. This approach is competitive with gradient-based optimizers on simple binary and multiclass problems. Furthermore, very few LaRank passes over the training examples deliver test error rates that are nearly as good as those of the final solution. For this entry we ran several epochs of the LaRank algorithm until reaching the convergence criterion. The SGD-QN algorithm uses stochastic gradient descent modified with an efficient method for estimating the diagonal of the inverse Hessian. The estimation method is inspired by oLBFGS [Schraudolph, 07]. Since this estimated matrix needs to be updated only rarely, this approximate second-order stochastic gradient method iterates nearly as fast as a classical stochastic gradient descent [Bottou98, Bottou07] but requires fewer iterations.
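
To make the SGD-QN idea concrete, the sketch below runs stochastic gradient steps on a linear SVM objective (hinge loss plus L2 regularization) but rescales each step by a diagonal estimate of the inverse Hessian that is refreshed only every few iterations. This is a minimal Python sketch based on the description above; the function name sgdqn_sketch, the hyperparameters (t0, skip, the clipping bounds), and the secant-style diagonal update are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sgdqn_sketch(X, y, lam=1e-4, epochs=5, t0=1e4, skip=16, seed=0):
    """Sketch of diagonal second-order SGD for a linear SVM
    (hinge loss + L2 regularizer), in the spirit of SGD-QN.
    Hyperparameter names and defaults are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    B = np.full(d, 1.0 / lam)  # diagonal scaling; init from regularizer curvature
    count = skip               # iterations left until the next diagonal update
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta = 1.0 / (t + t0)
            margin = y[i] * (w @ X[i])
            # gradient of lam/2 * ||w||^2 + hinge loss on example i
            g = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            w_old = w.copy()
            w = w - eta * B * g            # diagonally rescaled SGD step
            count -= 1
            if count <= 0:
                # secant-style re-estimation of the diagonal inverse Hessian:
                # compare the parameter change to the gradient change on the
                # same example (oLBFGS-inspired), done only every `skip` steps
                g_new = lam * w - (y[i] * X[i] if y[i] * (w @ X[i]) < 1 else 0.0)
                dw, dg = w - w_old, g_new - g
                mask = np.abs(dg) > 1e-12
                B[mask] = np.clip(dw[mask] / dg[mask], 1e-2 / lam, 1.0 / lam)
                count = skip
            t += 1
    return w

# toy usage: two separable Gaussian blobs with labels in {-1, +1}
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = sgdqn_sketch(X, y)
```

Because the diagonal estimate is refreshed only once every `skip` steps, the per-iteration cost stays close to that of plain stochastic gradient descent, which is consistent with the speed claim in the abstract.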
