Material Detail

Prior Knowledge and Sparse Methods for Convolved Multiple Outputs Gaussian Processes

This video was recorded at NIPS Workshops, Whistler 2009. One approach to accounting for non-trivial correlations between outputs employs convolution processes. Under a latent-function interpretation of the convolution transform, it is possible to establish dependencies between output variables. Two important aspects of this framework are how to introduce prior knowledge and how to perform efficient inference. By relating the convolution operation to dynamical systems, we can specify richer covariance functions for multiple outputs. We also present different sparse approximations for dependent-output Gaussian processes in the context of structured covariances. Joint work with Neil Lawrence, David Luengo and Michalis Titsias.
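As a rough illustration of the convolution-process construction the abstract refers to, the sketch below builds the joint covariance of two outputs that share one latent Gaussian process. The Gaussian form of the smoothing kernels, the one-dimensional inputs, the function name convolved_cov, and all parameter values are illustrative assumptions rather than details from the talk; under these assumptions the output and cross covariances have closed-form Gaussian expressions, which is what the code exploits.

```python
import numpy as np

def convolved_cov(x, xp, s_d, s_dp, ell_d, ell_dp, ell_u):
    """Covariance between outputs f_d(x) and f_d'(x'), each defined as the
    convolution of a single latent GP u with a Gaussian smoothing kernel.

    Illustrative assumptions (not from the talk): 1-D inputs, Gaussian
    smoothing kernels with sensitivities s_d, s_dp and widths ell_d, ell_dp,
    and a latent GP with a Gaussian covariance of width ell_u. Convolving
    Gaussians preserves the Gaussian form, so the covariance is a Gaussian in
    x - x' whose variance is the sum of the three squared widths.
    """
    var = ell_d**2 + ell_dp**2 + ell_u**2
    diff = x[:, None] - xp[None, :]
    return s_d * s_dp * np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)

# Structured multi-output covariance for two dependent outputs on one grid
# (all hyperparameter values below are arbitrary choices for the demo).
x = np.linspace(0.0, 5.0, 50)
ell_u = 0.5                                   # assumed latent-process width
K11 = convolved_cov(x, x, 1.0, 1.0, 0.3, 0.3, ell_u)
K22 = convolved_cov(x, x, 0.8, 0.8, 0.6, 0.6, ell_u)
K12 = convolved_cov(x, x, 1.0, 0.8, 0.3, 0.6, ell_u)
K = np.block([[K11, K12], [K12.T, K22]])

# Sampling from the joint prior shows the correlation induced between outputs.
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(np.zeros(2 * x.size), K + 1e-8 * np.eye(2 * x.size))
f1, f2 = sample[:x.size], sample[x.size:]
```

Because both outputs are driven by the same latent process, samples f1 and f2 move together; broader smoothing kernels (larger widths) yield smoother, more heavily averaged outputs, which is one way prior knowledge about each output enters the model.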
