# Material Detail

## Stochastic Methods for L1 Regularized Loss Minimization

This video was recorded at the 26th International Conference on Machine Learning (ICML), Montreal 2009. We describe and analyze two stochastic methods for $\ell_1$ regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second method updates the entire weight vector but uses only a single training example at each iteration. In both methods, the feature/example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the problem size is large. We demonstrate the advantage of the stochastic methods in experiments on synthetic and natural data sets.
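To make the first method concrete, here is a minimal sketch of stochastic coordinate descent for the Lasso: at each iteration one feature is picked uniformly at random and its weight is updated by a soft-thresholding step. The objective $\frac{1}{2m}\|Xw-y\|^2 + \lambda\|w\|_1$, the step sizes, and all variable names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the proximal map of the l1 norm.
    return np.sign(z) * max(abs(z) - t, 0.0)

def stochastic_coordinate_descent(X, y, lam, n_iters=20000, seed=0):
    """Sketch: minimize (1/2m)||Xw - y||^2 + lam*||w||_1 by updating
    one uniformly random coordinate per iteration."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    residual = -y.astype(float)              # Xw - y, with w = 0 initially
    L = (X ** 2).sum(axis=0) / m             # per-coordinate Lipschitz constants
    for _ in range(n_iters):
        j = rng.integers(d)                  # feature chosen uniformly at random
        g = X[:, j] @ residual / m           # partial derivative of the loss
        w_new = soft_threshold(w[j] - g / L[j], lam / L[j])
        residual += (w_new - w[j]) * X[:, j] # keep Xw - y current in O(m)
        w[j] = w_new
    return w

# Tiny synthetic demo: sparse ground truth, lightly noisy observations.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = stochastic_coordinate_descent(X, y, lam=0.05)
```

Each iteration touches only one column of `X`, so its cost is $O(m)$ rather than $O(md)$, which is the source of the claimed advantage on large problems.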
