Convex Relaxation and Estimation of High-Dimensional Matrices

This video was recorded at the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Ft. Lauderdale, 2011.

Problems that require estimating high-dimensional matrices from noisy observations arise frequently in statistics and machine learning. Examples include dimensionality reduction methods (e.g., principal components and canonical correlation), collaborative filtering and matrix completion (e.g., Netflix and other recommender systems), multivariate regression, estimation of time-series models, and graphical model learning. When the sample size is smaller than the matrix dimensions, all of these problems are ill-posed, so some type of structure must be imposed to obtain meaningful results. In recent years, relaxations based on the nuclear norm and other convex matrix regularizers have become popular. By framing a broad class of problems as special cases of matrix regression, we present a single theoretical result that provides guarantees on the accuracy of such convex relaxations. Our general result can be specialized to obtain various non-asymptotic bounds, among them sharp rates for noisy forms of matrix completion, matrix compression, and matrix decomposition. In all of these cases, information-theoretic methods can be used to show that our rates are minimax-optimal, and thus cannot be substantially improved upon by any algorithm, regardless of its computational complexity.
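
As a concrete illustration of the kind of convex relaxation the abstract refers to, the sketch below solves noisy matrix completion with a nuclear-norm regularizer, i.e. it minimizes 0.5 * ||mask * (X - Y)||_F^2 + lam * ||X||_* by proximal gradient descent (singular value thresholding). This is not the speakers' implementation; the function names, regularization weight lam, step size, and iteration count are illustrative choices for the example.

import numpy as np

def svt(A, tau):
    # Singular value thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_matrix(Y, mask, lam=0.5, step=1.0, iters=300):
    # Proximal gradient descent on 0.5*||mask*(X - Y)||_F^2 + lam*||X||_*.
    # A unit step is safe because the masked squared loss has Lipschitz constant 1.
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)                  # gradient of the data-fit term
        X = svt(X - step * grad, step * lam)   # shrink singular values
    return X

# Toy usage: recover a rank-2 matrix from roughly half of its entries plus noise.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = (rng.random(M.shape) < 0.5).astype(float)
Y = mask * (M + 0.1 * rng.standard_normal(M.shape))
X_hat = complete_matrix(Y, mask)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))

The shrinkage threshold step * lam is exactly the soft-thresholding amount prescribed by the nuclear norm's proximal operator; in practice the regularization weight would be tuned to the noise level and sampling rate rather than fixed as here.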
