Material Detail

Learning with Many Reproducing Kernel Hilbert Spaces


This video was recorded at the Workshop on Sparsity in Machine Learning and Statistics, Cumberland Lodge, 2009. In this talk, we consider the problem of learning a target function that belongs to the linear span of a large number of reproducing kernel Hilbert spaces. Such a problem arises naturally in many practical situations, with ANOVA, the additive model, and multiple kernel learning as the best-known and most important examples. We investigate approaches based on l1-type complexity regularization and on the nonnegative garrote. We show that both procedures can be computed efficiently and that the nonnegative garrote can be more favorable at times. We also study their theoretical properties from both the variable selection and estimation perspectives. We establish several probabilistic inequalities providing bounds on the excess risk and L2-error that depend on the sparsity of the problem. Part of the talk is based on joint work with Vladimir Koltchinskii.
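The nonnegative garrote mentioned in the abstract is due to Breiman: starting from ordinary least-squares coefficients, each coefficient is rescaled by a nonnegative shrinkage factor chosen by penalized least squares, which can set irrelevant coordinates exactly to zero. The following is a minimal sketch in the simplest linear setting (not the kernelized procedure from the talk); the coordinate-descent solver and all variable names are illustrative assumptions.

```python
import numpy as np

def nonnegative_garrote(X, y, lam, n_iter=200):
    """Breiman's nonnegative garrote (linear case, illustrative sketch).
    Finds factors c >= 0 minimizing ||y - Z c||^2 + lam * sum(c),
    where Z[:, j] = X[:, j] * beta_ols[j], via coordinate descent."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    Z = X * beta_ols                 # scale each column by its OLS coefficient
    c = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - Z @ c + Z[:, j] * c[j]     # partial residual excluding j
            # closed-form nonnegative update for coordinate j
            c[j] = max(0.0, (Z[:, j] @ r - lam / 2) / (Z[:, j] @ Z[:, j]))
    return c * beta_ols, c           # garrote estimate and shrinkage factors

# Toy example: two relevant and three irrelevant predictors.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_garrote, c = nonnegative_garrote(X, y, lam=5.0)
print(np.round(c, 3))
```

In this toy run the shrinkage factors for the three irrelevant predictors are driven exactly to zero, while the factors for the two relevant ones stay close to one, which is the variable-selection behavior the talk analyzes in the RKHS setting.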

