Material Detail

Online and Batch Learning Using Forward-Looking Subgradients

This video was recorded at the NIPS Workshop on Optimization for Machine Learning, Whistler 2008. Classical optimization techniques have found widespread use in machine learning. Convex optimization has occupied center stage, and significant effort continues to be devoted to it. New problems constantly emerge in machine learning, e.g., structured learning and semi-supervised learning, while at the same time fundamental problems such as clustering and classification continue to be better understood. Moreover, machine learning is now very important for real-world problems with massive datasets, streaming inputs, the need for distributed computation, and complex models. These challenging characteristics of modern problems and datasets indicate that we must go beyond the "traditional optimization" approaches common in machine learning. What is needed is optimization "tuned" for machine learning tasks. For example, techniques such as non-convex optimization (for semi-supervised learning, sparsity constraints), combinatorial optimization and relaxations (structured learning), stochastic optimization (massive datasets), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs) are relevant in this setting. These techniques naturally draw inspiration from other fields, such as operations research, polyhedral combinatorics, theoretical computer science, and the optimization community. More information about the workshop: http://opt2008.kyb.tuebingen.mpg.de/
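The talk title refers to subgradient methods with a "forward-looking" step, in the spirit of forward-backward splitting: an ordinary subgradient step on the loss is followed by a closed-form step on the regularizer. The description above does not spell out the algorithm, so the sketch below is only an illustration under assumed choices (hinge loss, l1 regularization, a 1/sqrt(t) step size, and the helper name soft_threshold), not the speaker's exact method.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_forward_looking_subgradient(data, lam=0.1, eta0=1.0):
    """Illustrative online update in the forward-backward splitting style:
    a subgradient step on the loss, then a 'forward-looking' closed-form
    step on the l1 regularizer. Hinge loss is assumed for illustration only."""
    n_features = data[0][0].shape[0]
    w = np.zeros(n_features)
    for t, (x, y) in enumerate(data, start=1):
        eta = eta0 / np.sqrt(t)                            # assumed diminishing step size
        margin = y * np.dot(w, x)
        g = -y * x if margin < 1.0 else np.zeros_like(w)   # hinge-loss subgradient
        w_half = w - eta * g                               # subgradient step on the loss
        w = soft_threshold(w_half, eta * lam)              # forward step on the regularizer
    return w

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
w = online_forward_looking_subgradient(list(zip(X, y)))
print(w)
```

Because the regularizer step is taken exactly (here via soft-thresholding), the iterates can be exactly sparse, which is one practical motivation for this family of methods on large, streaming datasets.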


