Material Detail

Online gradient descent for LS regression: Non-asymptotic bounds and application to bandits

This video was recorded at the Large-scale Online Learning and Decision Making (LSOLDM) Workshop, Cumberland Lodge, 2013. We propose a stochastic gradient descent-based method with randomisation of samples for solving least squares regression. We consider a "big data" regime where both the dimension, d, of the data and the number, T, of training samples are large. Through finite-time analyses we provide performance bounds for this method both in high probability and in expectation. In particular, we show that, with probability 1-\delta, an \epsilon-approximation of the least squares regression solution can be computed in O(d log(1/\delta)/\epsilon^2) complexity, irrespective of the number of samples T.

Next, we improve the computational complexity of online learning algorithms that need to frequently recompute least squares regression estimates of parameters. We propose two stochastic gradient descent schemes with randomisation that efficiently track the true solutions of the regression problems, achieving an O(d) improvement in complexity, where d is the dimension of the data. The first algorithm assumes strong convexity of the regression problem, and we provide bounds on the error both in expectation and with high probability (the latter is often needed to provide theoretical guarantees for higher-level algorithms). The second algorithm handles cases where strong convexity of the regression problem cannot be guaranteed and uses adaptive regularisation. We again give error bounds in both expectation and high probability. We apply our approaches to the linear bandit algorithms PEGE and ConfidenceBall and demonstrate significant gains in complexity in both cases. Since strong convexity is guaranteed by the PEGE algorithm, we lose only logarithmic factors in its regret performance. In the ConfidenceBall algorithm, on the other hand, we regularise adaptively to ensure strong convexity, and this results in an O(n^{1/5}) deterioration of the regret.
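As an illustration of the kind of iteration described in the abstract, the sketch below shows stochastic gradient descent for least squares with uniform randomisation over the T samples, written in Python. The function name, step-size schedule, and the use of iterate averaging are illustrative assumptions for exposition; this is not the authors' exact algorithm nor its integration with PEGE or ConfidenceBall.

    import numpy as np

    def sgd_least_squares(X, y, n_iters=20000, step=0.1, seed=0):
        # Minimise (1/T) * sum_t (x_t^T theta - y_t)^2 by sampling one
        # training pair uniformly at random per iteration (illustrative).
        rng = np.random.default_rng(seed)
        T, d = X.shape
        theta = np.zeros(d)
        theta_avg = np.zeros(d)  # iterate averaging (assumption, not from the abstract)
        for k in range(1, n_iters + 1):
            i = rng.integers(T)                        # randomised sample index
            grad = 2.0 * (X[i] @ theta - y[i]) * X[i]  # single-sample gradient
            theta -= (step / np.sqrt(k)) * grad        # decaying step size (assumption)
            theta_avg += (theta - theta_avg) / k       # running average of iterates
        return theta_avg

    # Hypothetical usage on synthetic data:
    rng = np.random.default_rng(1)
    X = rng.standard_normal((5000, 10))
    theta_true = rng.standard_normal(10)
    y = X @ theta_true + 0.1 * rng.standard_normal(5000)
    theta_hat = sgd_least_squares(X, y)

Each iteration touches a single sample, so the per-iteration cost is O(d) and independent of T, which is the property the complexity bounds quoted above rely on.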


