A Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences

This video was recorded at the 24th Annual Conference on Learning Theory (COLT), Budapest, 2011. We consider a Kullback-Leibler-based algorithm for the stochastic multi-armed bandit problem in the case of distributions with finite supports (not necessarily known beforehand), whose asymptotic regret matches the lower bound of Burnetas and Katehakis (1996). Our contribution is a finite-time analysis of this algorithm: we obtain regret bounds whose main terms are smaller than those of previously known algorithms with finite-time analyses, such as UCB-type algorithms.
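The abstract's KL-based index can be illustrated in the simplest special case, Bernoulli rewards, where the general finite-support index reduces to inverting the binary KL divergence. The sketch below is an assumption-laden simplification of the KL-UCB-style idea, not the paper's exact algorithm: the function names `bernoulli_kl` and `kl_ucb_index` are illustrative, and the exploration level is taken to be log(t) with no additional log-log term.

```python
import math

def bernoulli_kl(p, q):
    """Binary KL divergence kl(p, q), using the 0*log(0) = 0 convention.

    Inputs are clipped away from {0, 1} for numerical stability.
    """
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, precision=1e-6):
    """Illustrative KL-based upper-confidence index for one Bernoulli arm.

    Returns (approximately) the largest q in [mean, 1] such that
        pulls * kl(mean, q) <= log(t),
    found by bisection: kl(mean, .) is increasing on [mean, 1].
    """
    level = math.log(t) / pulls
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= level:
            lo = mid  # mid still satisfies the constraint; move up
        else:
            hi = mid  # constraint violated; move down
    return lo
```

At each round the algorithm would pull the arm with the largest index; as an arm accumulates pulls, its index shrinks toward its empirical mean, e.g. `kl_ucb_index(0.5, 1000, 100)` is closer to 0.5 than `kl_ucb_index(0.5, 10, 100)`.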
