
Exponential Families in Feature Space

This video was recorded at the Machine Learning Summer School (MLSS), Canberra 2006. In this introductory course we discuss how log-linear models can be extended to feature space. Log-linear models have been studied by statisticians for a long time under the name of the exponential family of probability distributions. We provide a unified framework in which many existing kernel algorithms appear as special cases, and which also allows us to derive natural generalizations of those algorithms. In particular, we show how to recover Gaussian Processes, Support Vector Machines, multi-class discrimination, and sequence annotation (via Conditional Random Fields). We also show how to deal with missing data and how to perform MAP estimation on Conditional Random Fields in feature space. The requisite background for the course is covered briskly in the first two lectures. Knowledge of linear algebra and familiarity with functional analysis will be helpful.
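To make the exponential-family form concrete, here is a minimal sketch (not from the lecture itself; the joint feature map `phi` and the toy data are assumptions) of a conditional log-linear model p(y | x) = exp(⟨φ(x, y), θ⟩ − g(θ, x)), where g is the log-partition function that normalizes the distribution:

```python
import numpy as np

def phi(x, y, num_classes):
    """Assumed joint feature map: copy the input features into the block for class y."""
    f = np.zeros(num_classes * x.shape[0])
    f[y * x.shape[0]:(y + 1) * x.shape[0]] = x
    return f

def log_partition(x, theta, num_classes):
    """g(theta, x) = log sum_y exp(<phi(x, y), theta>), computed stably."""
    scores = np.array([phi(x, y, num_classes) @ theta for y in range(num_classes)])
    m = scores.max()
    return m + np.log(np.exp(scores - m).sum())

def log_prob(x, y, theta, num_classes):
    """Log-linear model in exponential-family form: <phi(x, y), theta> - g(theta, x)."""
    return phi(x, y, num_classes) @ theta - log_partition(x, theta, num_classes)

# Toy example: 2 input features, 3 classes, theta = 0 gives the uniform model.
x = np.array([1.0, -0.5])
theta = np.zeros(2 * 3)
print(np.exp(log_prob(x, 0, theta, 3)))  # 1/3: each class equally likely
```

Replacing the explicit feature map with a kernel evaluation is what lifts this construction to feature space, and choosing φ over label sequences rather than single labels yields the Conditional Random Fields mentioned above.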

