Piecewise-Stationary Bandit Problems with Side Observations

This video was recorded at the 26th International Conference on Machine Learning (ICML), Montreal, 2009.

We consider a sequential decision problem where the rewards are generated by a piecewise-stationary distribution. However, the different reward distributions are unknown and may change at unknown instants. Our approach uses a limited number of side observations on past rewards, but does not require prior knowledge of the frequency of changes. In spite of the adversarial nature of the reward process, we provide an algorithm whose regret, with respect to the baseline with perfect knowledge of the distributions and the changes, is O(k log T), where k is the number of changes up to time T. This is in contrast to the case where side observations are not available, where the regret is at least Ω(√T).
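
The paper's own algorithm and analysis are not reproduced here. As a rough, hypothetical sketch of the problem setting, the Python snippet below simulates a piecewise-stationary Bernoulli bandit and a UCB1 learner that spends a small budget of side observations on past rewards of unplayed arms, resetting its estimates when a window of side observations disagrees with an arm's long-run mean. The segment means, observation schedule, and detection rule are all illustrative assumptions, not the method from the talk.

import math
import random

def run(horizon=10_000, n_arms=3, seed=0):
    """UCB1 plus a crude side-observation change detector on a
    piecewise-stationary Bernoulli bandit (all parameters hypothetical)."""
    rng = random.Random(seed)
    # Two stationary segments with one change at t = horizon // 2 (illustrative).
    segments = [(0, [0.2, 0.5, 0.8]), (horizon // 2, [0.8, 0.5, 0.2])]

    def mean_at(t, a):
        return next(m[a] for start, m in reversed(segments) if t >= start)

    counts = [0] * n_arms
    sums = [0.0] * n_arms
    recent = [[] for _ in range(n_arms)]  # windows of side observations
    total = 0.0
    for t in range(1, horizon + 1):
        if 0 in counts:                   # play each arm once first
            arm = counts.index(0)
        else:                             # standard UCB1 index
            arm = max(range(n_arms), key=lambda a:
                      sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = 1.0 if rng.random() < mean_at(t, arm) else 0.0
        counts[arm] += 1
        sums[arm] += r
        total += r

        # Every 25 rounds, draw one side observation of a random arm's reward;
        # if a window of such observations drifts from that arm's running mean,
        # assume a change point occurred and reset all estimates.
        if t % 25 == 0:
            a = rng.randrange(n_arms)
            recent[a].append(1.0 if rng.random() < mean_at(t, a) else 0.0)
            if len(recent[a]) >= 20 and counts[a] > 0:
                window_mean = sum(recent[a][-20:]) / 20
                if abs(window_mean - sums[a] / counts[a]) > 0.3:
                    counts = [0] * n_arms
                    sums = [0.0] * n_arms
                    recent = [[] for _ in range(n_arms)]
    return total

if __name__ == "__main__":
    print("total reward:", run())

Resetting on detection is only a stand-in for the paper's approach; the point of the sketch is that side observations let the learner monitor arms it is not currently playing, which is what makes detecting changes (and hence O(k log T)-type regret) plausible at all.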
