Learning with similarity functions
This video was recorded at the 27th International Conference on Machine Learning (ICML), Haifa, 2010.

Kernel functions have become an extremely popular tool in machine learning, with many applications and an attractive theory. This theory views a kernel as performing an implicit mapping of data points into a possibly very high-dimensional space, and describes a kernel function as good for a given learning problem if the data is separable by a large margin in that implicit space. In this talk I will describe an alternative, more general theory of learning with similarity functions (i.e., sufficient conditions for a similarity function to allow one to learn well) that does not require reference to implicit spaces and does not require the function to be positive semi-definite (or even symmetric). In particular, I will describe a notion of a good similarity function for a given learning problem that (a) is fairly natural and intuitive (it does not require an implicit space and allows for functions that are not positive semi-definite), (b) is a sufficient condition for learning well, and (c) strictly generalizes the notion of a large-margin kernel function, in that any such kernel is also a good similarity function, though not necessarily vice versa.
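In the line of work behind this talk (Balcan and Blum), a similarity function is called good, roughly, if most examples are on average noticeably more similar to random examples of their own label than to examples of the other label. The algorithmic consequence is that such a function can be used directly, with no implicit space: map each example to its vector of similarities to a set of randomly drawn landmark points, then learn a linear separator in that explicit feature space. The sketch below illustrates this landmark-embedding idea; the particular sim function, the landmark count, and the use of logistic regression as the linear learner are illustrative assumptions, not details from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sim(x, z):
    """Hypothetical pairwise similarity; it need not be PSD or symmetric."""
    return np.exp(-np.abs(x - z).sum())

def landmark_features(X, landmarks):
    """Map each example to its vector of similarities to the landmark points."""
    return np.array([[sim(x, l) for l in landmarks] for x in X])

# Toy data: two-dimensional points labeled by a linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Draw random landmarks from the (unlabeled) data, embed every example
# into the explicit similarity space, and learn a linear separator there.
landmarks = X[rng.choice(len(X), size=20, replace=False)]
F = landmark_features(X, landmarks)
clf = LogisticRegression().fit(F, y)
print("training accuracy:", clf.score(F, y))
```

Note that sim here is just one bounded pairwise function; any similarity measure can be substituted, including asymmetric or non-positive-semi-definite ones, which is exactly the flexibility over kernels that the talk emphasizes.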