Matching Pursuit Kernel Fisher Discriminant Analysis

This video was recorded at the Workshop on Sparsity in Machine Learning and Statistics, Cumberland Lodge, 2009.

We consider the problem of high-dimensional non-linear variable selection for supervised learning. Our approach is based on performing linear selection among exponentially many well-defined groups of features, or positive definite kernels, that characterize non-linear interactions between the original variables. To select efficiently from these many kernels, we use their natural hierarchical structure to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph; we show that kernel selection can then be performed through a graph-adapted sparsity-inducing norm, in time polynomial in the number of selected kernels. Moreover, we study the consistency of variable selection in high-dimensional settings, showing that under certain assumptions our regularization framework allows a number of irrelevant variables that is sub-exponential in the number of observations.
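
To make the abstract more concrete, here is a rough, illustrative sketch of the general idea of selecting among many candidate kernels built on small groups of variables, scored with a regularized kernel Fisher discriminant criterion and added greedily, matching-pursuit style. This is a toy sketch under assumptions of my own (RBF kernels on singletons and pairs, the `kfd_score` objective, the regularizer `lam`); it is not the graph-adapted sparsity-inducing norm or the polynomial-time DAG algorithm described in the talk.

```python
# Illustrative sketch only, not the speakers' algorithm: it greedily adds
# candidate kernels (built on single variables and variable pairs) to a summed
# kernel, scoring each candidate with a regularized kernel Fisher discriminant
# criterion. The kernel choice, the score, and `lam` are assumptions for the demo.
from itertools import combinations

import numpy as np


def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)


def kfd_score(K, y, lam=1e-2):
    """Optimal value of the regularized kernel Fisher discriminant for kernel K:
    (m0 - m1)^T (N + lam*I)^{-1} (m0 - m1), where m_c are the class-mean kernel
    columns and N is the within-class scatter expressed in the kernel basis."""
    n = K.shape[0]
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]
    N = np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    d = m[0] - m[1]
    return float(d @ np.linalg.solve(N + lam * np.eye(n), d))


def greedy_kernel_selection(kernels, y, n_select=2, lam=1e-2):
    """Matching-pursuit-flavoured selection: repeatedly add the candidate kernel
    whose inclusion most increases the Fisher discriminant score of the sum."""
    n = len(y)
    selected, K_sum = [], np.zeros((n, n))
    for _ in range(n_select):
        scores = [kfd_score(K_sum + Kj, y, lam) if j not in selected else -np.inf
                  for j, Kj in enumerate(kernels)]
        j_star = int(np.argmax(scores))
        selected.append(j_star)
        K_sum = K_sum + kernels[j_star]
    return selected, K_sum


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 6))
    # Only the interaction between variables 0 and 1 carries the class signal.
    y = (X[:, 0] * X[:, 1] > 0).astype(int)

    # One candidate kernel per variable and per pair of variables.
    groups = [(i,) for i in range(6)] + list(combinations(range(6), 2))
    kernels = [rbf_kernel(X[:, list(g)], X[:, list(g)], gamma=1.0) for g in groups]

    selected, _ = greedy_kernel_selection(kernels, y, n_select=2)
    print("selected variable groups:", [groups[j] for j in selected])
```

A greedy criterion like this only ever touches the kernels it actually selects, which reflects the same scalability concern the abstract addresses with a norm adapted to the kernel hierarchy; the norm-based approach, unlike this sketch, comes with the selection-consistency guarantees mentioned above.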
