Robust Bounds for Classification via Selective Sampling

This video was recorded at the 26th International Conference on Machine Learning (ICML), Montreal 2009. We introduce a new algorithm for binary classification in the selective sampling protocol. Our algorithm uses Regularized Least Squares (RLS) as its base classifier, and for this reason it can be run efficiently in any RKHS. Unlike previous margin-based semi-supervised algorithms, our sampling condition hinges on a simultaneous upper bound on the bias and variance of the RLS estimate under a simple linear label noise model. This fact allows us to prove performance bounds that hold for an arbitrary sequence of instances. In particular, we show that our sampling strategy approximates the margin of the Bayes optimal classifier to any desired accuracy $\varepsilon$ by asking $\widetilde{O}(d/\varepsilon^2)$ queries (in the RKHS case $d$ is replaced by a suitable spectral quantity). While these are the standard rates in the fully supervised i.i.d. case, the best previously known result in our harder setting was $\widetilde{O}(d^3/\varepsilon^4)$. Preliminary experiments show that some of our algorithms also exhibit good practical performance.
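For intuition, here is a minimal Python sketch of the kind of selective sampler the abstract describes: an RLS estimate maintained online, with a label query issued whenever an uncertainty term derived from the regularized correlation matrix dominates the observed margin. The specific threshold, the constant `kappa`, and the `label_oracle` callback are illustrative assumptions, not the paper's exact sampling condition.

```python
import numpy as np

def selective_sampling_rls(stream, lam=1.0, kappa=1.0):
    """Sketch of selective sampling with an RLS base classifier.

    `stream` yields (x, label_oracle) pairs, where `x` is a NumPy
    feature vector and `label_oracle()` returns the true label in
    {-1, +1} when (and only when) queried. `kappa` is a hypothetical
    constant trading queries for accuracy.
    """
    A, b, queries = None, None, 0
    for t, (x, label_oracle) in enumerate(stream, start=1):
        if A is None:
            d = len(x)
            A = lam * np.eye(d)        # regularized correlation matrix
            b = np.zeros(d)            # sum of y_s * x_s over queried rounds
        A_inv = np.linalg.inv(A)
        w = A_inv @ b                  # current RLS estimate
        margin = float(w @ x)          # signed margin of the prediction
        r = float(x @ A_inv @ x)       # variance proxy: uncertainty along x
        yhat = 1.0 if margin >= 0 else -1.0
        # Query when the uncertainty term dominates the observed margin,
        # i.e. when the RLS estimate cannot be trusted on this instance.
        if margin ** 2 <= kappa * r * np.log(t + 1):
            y = label_oracle()
            queries += 1
            A += np.outer(x, x)        # rank-one update of the statistics
            b += y * x
        yield yhat, queries
```

Recomputing `A_inv` from scratch keeps the sketch short; a practical implementation would maintain it with a rank-one (Sherman-Morrison) update, and the same bookkeeping carries over to an RKHS by replacing inner products with kernel evaluations.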


