Thompson Sampling: a provably good Bayesian heuristic for bandit problems
This video was recorded at the Large-scale Online Learning and Decision Making (LSOLDM) Workshop, Cumberland Lodge, 2013. The multi-armed bandit problem is a basic model for managing the exploration/exploitation trade-off that arises in many situations. Thompson Sampling [Thompson 1933] is one of the earliest heuristics for the multi-armed bandit problem, and it has recently seen a surge of interest due to its elegance, flexibility, efficiency, and promising empirical performance. In this talk, I will discuss recent results showing that Thompson Sampling achieves near-optimal regret for several popular variants of the multi-armed bandit problem, including linear contextual bandits. Interestingly, these works provide a prior-free, frequentist-style analysis of a Bayesian heuristic, and thereby rigorous support for the intuition that once you acquire enough data, it doesn't matter what prior you started from, because your posterior will be accurate enough.
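For readers unfamiliar with the heuristic, here is a minimal sketch of Thompson Sampling for the Bernoulli bandit with independent Beta(1, 1) priors, the setting closest to Thompson's original 1933 paper. The `pull` callback and the example arm means are illustrative assumptions, not taken from the talk; the talk's regret results concern more general variants, including linear contextual bandits.

```python
import random

def thompson_sampling_bernoulli(pull, n_arms, n_rounds):
    """Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors.

    pull(arm) must return a 0/1 reward for the chosen arm (assumed interface).
    """
    successes = [0] * n_arms  # observed 1-rewards per arm
    failures = [0] * n_arms   # observed 0-rewards per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a mean-reward estimate for each arm from its Beta posterior
        samples = [random.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        # Play the arm whose sampled mean is largest
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = pull(arm)
        total_reward += reward
        # Update the posterior of the played arm with the observed reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward

# Example run: three arms with (hypothetical) unknown success probabilities
true_means = [0.3, 0.5, 0.7]
print(thompson_sampling_bernoulli(
    lambda a: 1 if random.random() < true_means[a] else 0,
    n_arms=3, n_rounds=10000))
```

Because each arm's index is sampled from its posterior rather than set to a point estimate, under-explored arms retain wide posteriors and keep getting tried, which is the mechanism behind the exploration/exploitation balance the abstract describes.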