Material Detail

Variational Inference and Experimental Design for Sparse Linear Models

This video was recorded at the Workshop on Sparsity and Inverse Problems in Statistical Theory and Econometrics, Berlin 2008.

Sparsity is a fundamental concept in modern statistics, and often the only general principle currently available for addressing novel learning applications with many more variables than observations. Despite recent advances in the theoretical understanding and algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no scalable algorithms. We provide an approximate Bayesian inference algorithm for sparse linear models that can be used with hundreds of thousands of variables. Our method employs a convex relaxation to variational inference and settles an open question in continuous Bayesian inference: the Gaussian lower bound relaxation is convex for a class of super-Gaussian potentials including the Laplace and Bernoulli potentials. Our algorithm reduces to the same computational primitives used for sparse estimation methods, but additionally requires Gaussian marginal variance estimation. We show how the Lanczos algorithm from numerical mathematics can be employed to compute the latter. We are interested in Bayesian experimental design, a powerful framework for optimizing measurement architectures, and have applied our framework to problems of magnetic resonance imaging design and reconstruction.
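The convexity result concerns Gaussian lower bounds on super-Gaussian potentials. As an illustration (not necessarily in the notation of the talk), the Laplace potential admits the standard variational representation as a pointwise supremum of Gaussian-form bounds; any fixed value of the variational scale parameter γ gives a Gaussian lower bound:

```latex
% AM-GM: s^2/(2\gamma) + \tau^2\gamma/2 \ge \tau|s|, with equality at
% \gamma = |s|/\tau. Exponentiating and negating gives
e^{-\tau|s|} \;=\; \sup_{\gamma > 0}\,
  \exp\!\Big(-\frac{s^2}{2\gamma} - \frac{\tau^2 \gamma}{2}\Big),
\qquad \tau > 0,
% so each fixed \gamma yields an (unnormalised) Gaussian lower bound in s.
```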
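The Gaussian marginal variances mentioned above are the diagonal of the inverse of a large posterior precision matrix, accessible only through matrix-vector products. A minimal sketch of a Lanczos-based estimator (function name and details are illustrative, not the talk's implementation; with full reorthogonalisation and k equal to the dimension the estimate becomes exact):

```python
import numpy as np

def lanczos_variances(A_mv, n, k, seed=0):
    """Estimate diag(A^{-1}) for a symmetric positive definite A given only
    its matrix-vector product A_mv, via k Lanczos steps: build an orthonormal
    Krylov basis Q and tridiagonal T, then return diag(Q T^{-1} Q^T)."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alphas, betas = [], []
    for j in range(k):
        Q[:, j] = q
        w = A_mv(q)
        alphas.append(q @ w)
        # full reorthogonalisation keeps the basis numerically orthonormal
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        b = np.linalg.norm(w)
        if b < 1e-10 or j == k - 1:  # breakdown or final step
            Q = Q[:, :j + 1]
            break
        betas.append(b)
        q = w / b
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    Tinv = np.linalg.inv(T)          # T is small (k x k)
    # diag(Q Tinv Q^T), computed row-wise without forming the n x n matrix
    return np.einsum('ij,jl,il->i', Q, Tinv, Q)
```

The point of the sketch is that only matrix-vector products with the precision matrix are needed, the same primitive that sparse estimation solvers use.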
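For the linear-Gaussian case, Bayesian experimental design can score a candidate measurement by the expected information gain of the posterior, which is where the marginal (co)variances enter. A minimal sketch under that assumption (function names hypothetical, not from the talk):

```python
import numpy as np

def information_gain(Sigma, X, noise_var=1.0):
    """Expected information gain (nats) of each candidate linear measurement
    x^T u + noise under a Gaussian posterior with covariance Sigma:
    0.5 * log(1 + x^T Sigma x / noise_var). Rows of X are candidates."""
    q = np.einsum('ij,jk,ik->i', X, Sigma, X)
    return 0.5 * np.log1p(q / noise_var)

def update_posterior(Sigma, x, noise_var=1.0):
    """Sherman-Morrison rank-one downdate of the posterior covariance after
    actually acquiring the measurement x^T u + noise."""
    s = Sigma @ x
    return Sigma - np.outer(s, s) / (noise_var + x @ s)
```

Greedily acquiring the highest-scoring candidate and downdating the covariance is the standard sequential design loop; each scoring pass needs exactly the marginal variance information the Lanczos machinery provides.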


