Near-Optimal Herding

This video was recorded at the 27th Annual Conference on Learning Theory (COLT), Barcelona 2014. Herding is an algorithm of recent interest in the machine learning community, motivated by inference in Markov random fields. It solves the following sampling problem: given a set X ⊂ R^d with mean μ, construct an infinite sequence of points from X such that, for every t ≥ 1, the mean of the first t points in that sequence lies within Euclidean distance O(1/t) of μ. The classic Perceptron boundedness theorem implies that such a result actually holds for a wide class of algorithms, although the factors suppressed by the O(1/t) notation are exponential in d. Thus, to establish a non-trivial result for the sampling problem, one must carefully analyze the factors suppressed by the O(1/t) error bound. This paper studies the best error that can be achieved for the sampling problem. Known analyses of the Herding algorithm give an error bound that depends on geometric properties of X but, even under favorable conditions, this bound depends linearly on d. We present a new polynomial-time algorithm that solves the sampling problem with error O(√d · log^2.5|X| / t), assuming that X is finite. Our algorithm is based on recent algorithmic results in discrepancy theory. We also show that any algorithm for the sampling problem must have error Ω(√d / t). This implies that our algorithm is optimal to within logarithmic factors.
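
For illustration, below is a minimal Python sketch (assuming NumPy is available) of the standard greedy Herding update that the abstract refers to; it is not the new discrepancy-based algorithm presented in the talk. The function name, the random point set, and the error printout are hypothetical and are only meant to show how the running mean of the selected points tracks μ.

    import numpy as np

    def herding(X, T):
        # Classic herding: greedily pick points of X so that the running mean
        # of the chosen points tracks the true mean mu of X.
        # X : (n, d) array of points, T : number of points to generate.
        mu = X.mean(axis=0)          # target mean
        w = mu.copy()                # herding weight vector, w_0 = mu
        chosen = []
        for t in range(T):
            # pick the point most aligned with the current weight vector
            i = int(np.argmax(X @ w))
            chosen.append(i)
            # update: w_{t+1} = w_t + mu - x_{t+1}
            w = w + mu - X[i]
        return chosen

    # Example: the running-mean error should shrink roughly like 1/t.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))
    idx = herding(X, 200)
    running_mean = np.cumsum(X[idx], axis=0) / np.arange(1, 201)[:, None]
    errors = np.linalg.norm(running_mean - X.mean(axis=0), axis=1)
    print(errors[::50])

On inputs like this the printed errors decay roughly like 1/t, matching the O(1/t) rate discussed above, though the constants hidden by that notation are what the paper analyzes.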
