Denoising and Dimension Reduction in Feature Space

This video was recorded at the Workshop on Algorithms in Complex Systems, Eindhoven, 2007. The talk presents recent work that complements our understanding of the VC picture in kernel-based learning. Our finding is that, if the kernel matches the underlying learning problem, the relevant information of a supervised learning problem is contained, up to negligible error, in a finite number of leading kernel PCA components. Thus, kernels not only transform data sets so that good generalization can be achieved using only linear discriminant functions, but this transformation also makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data for supervised learning problems. Practically, we propose an algorithm which recovers the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to aid in model selection, and (3) to denoise in feature space in order to yield better classification results. We complement our theoretical findings by reporting on applications of our method to data from gene finding and brain-computer interfacing. This is joint work with Claudia Sannelli and Joachim M. Buhmann.
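As a concrete illustration of the idea in the abstract, the following is a minimal sketch (not the authors' reference implementation) of checking how many leading kernel PCA components carry the label information, and of denoising the labels in feature space. The RBF kernel, its width gamma=1.0, the scikit-learn toy data set, and the 90% cutoff are all illustrative assumptions, not choices from the talk.

# Minimal sketch: project the label vector onto kernel PCA components
# and count how many leading components capture most of it.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel

# Toy binary classification problem; labels recoded to {-1, +1}.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
y = 2 * y - 1

# Centered kernel matrix (RBF width gamma=1.0 is an assumption).
K = rbf_kernel(X, X, gamma=1.0)
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# Kernel PCA directions are the eigenvectors of the centered kernel
# matrix; sort them by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Squared contribution of each kernel PCA component to the labels.
contrib = (eigvecs.T @ y) ** 2

# If the kernel matches the problem, a small number of leading
# components should cover most of ||y||^2; 90% is an illustrative
# cutoff, not the authors' criterion.
cum = np.cumsum(contrib) / np.sum(contrib)
d = int(np.searchsorted(cum, 0.90)) + 1
print(f"{d} of {n} components carry 90% of the label information")

# Denoising in feature space: reconstruct the labels from the d
# leading components only.
y_denoised = eigvecs[:, :d] @ (eigvecs[:, :d].T @ y)

If the kernel fits the data, d should stay small relative to n, whereas a slow decay of the contributions suggests a mismatched kernel; this is, roughly, the kind of geometric diagnostic and model-selection signal the abstract describes.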
