Bounded regret in stochastic multi-armed bandits

This video was recorded at the 26th Annual Conference on Learning Theory (COLT), Princeton 2013. We study the stochastic multi-armed bandit problem when one knows the value μ(⋆) of an optimal arm, as well as a positive lower bound on the smallest positive gap Δ. We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows Δ, and that bounded regret of order 1/Δ is not possible if one only knows μ(⋆).
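To make the setting concrete, here is a minimal simulation sketch, not the paper's actual policy: a simplified elimination-style rule that exploits the known μ(⋆) and a lower bound on Δ by dropping any arm whose upper confidence bound falls below μ(⋆) − Δ/2. The function name, the Bernoulli reward model, and the specific confidence radius are illustrative assumptions.

```python
import math
import random

def bounded_regret_policy(means, mu_star, delta_gap, horizon, seed=0):
    """Illustrative elimination-style sketch (NOT the paper's exact policy).

    means     : true Bernoulli means of the arms (for simulation only)
    mu_star   : known value of the optimal arm, mu(*)
    delta_gap : known positive lower bound on the smallest positive gap
    Returns the cumulative (pseudo-)regret over `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(means)
    active = list(range(k))     # arms still considered plausibly optimal
    counts = [0] * k
    sums = [0.0] * k
    regret = 0.0
    for t in range(1, horizon + 1):
        arm = active[t % len(active)]          # round-robin over active arms
        reward = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli draw
        counts[arm] += 1
        sums[arm] += reward
        regret += mu_star - means[arm]
        if len(active) > 1:
            keep = []
            for a in active:
                n = counts[a]
                # confidence radius; unplayed arms are always kept
                radius = math.sqrt(2 * math.log(max(t, 2)) / n) if n else float("inf")
                # drop arms confidently below mu(*) - delta_gap / 2
                if n == 0 or sums[a] / n + radius >= mu_star - delta_gap / 2:
                    keep.append(a)
            if keep:
                active = keep
    return regret
```

Because suboptimal arms are eventually eliminated (with high probability), the cumulative regret stops growing with the horizon, which is the qualitative behavior the abstract describes; the paper's policy and analysis are more refined than this sketch.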
