
Linear Bellman Equations: Theory and Applications

This video was recorded at NIPS Workshops, Whistler 2009. I will provide a brief overview of a class of stochastic optimal control problems recently developed by our group as well as by Bert Kappen's group. This problem class is quite general and yet has a number of unique properties, including linearity of the exponentially-transformed (Hamilton-Jacobi) Bellman equation, duality with Bayesian inference, convexity of the inverse optimal control problem, compositionality of optimal control laws, and a path-integral representation of the exponentially-transformed value function. I will then focus on function approximation methods that exploit the linearity of the Bellman equation, and illustrate how such methods scale to high-dimensional continuous dynamical systems. Computing the weights for a fixed set of basis functions can be done very efficiently by solving a large but sparse linear problem. This enables us to work with hundreds of millions of (localized) bases. Still, the volume of a high-dimensional state space is too large to be filled with localized bases, forcing us to consider adaptive methods for positioning and shaping those bases. Several such methods will be compared.
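
For concreteness, here is a minimal sketch of the linearity property the talk exploits: in a discrete first-exit problem with state cost q(x), passive dynamics p(x'|x), and desirability z(x) = exp(-v(x)), the optimal z satisfies a linear equation on the interior states and can therefore be computed with a single sparse linear solve rather than value iteration. The chain dynamics, costs, and problem size below are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch (assumed toy problem): a first-exit linearly-solvable MDP on a
# 1-D chain. States 0..N-1 are interior; an absorbing terminal state sits to the
# right of state N-1. With z = exp(-v), the linear Bellman equation reads
#   z = diag(exp(-q)) (P z + b_term * z_term)   on interior states,
# so z is obtained from one sparse linear solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 100                            # number of interior states (assumed)
q = 0.05 * np.ones(N)              # running state cost (assumed)
q_term = 0.0                       # terminal cost (assumed)

# Passive random-walk dynamics over the interior states; the last interior
# state can also step into the terminal state.
rows, cols, vals = [], [], []
b_term = np.zeros(N)               # probability mass flowing to the terminal state
for i in range(N):
    for j, p in ((i - 1, 0.5), (i + 1, 0.5)):
        if j < 0:
            j = 0                  # reflect at the left boundary
        if j >= N:
            b_term[i] += p         # transition into the terminal state
        else:
            rows.append(i); cols.append(j); vals.append(p)
P = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

# Linear Bellman equation: (I - diag(exp(-q)) P) z = exp(-q) * b_term * exp(-q_term)
G = sp.diags(np.exp(-q)) @ P
rhs = np.exp(-q) * b_term * np.exp(-q_term)
z = spla.spsolve(sp.eye(N, format="csr") - G, rhs)   # one sparse solve
v = -np.log(z)                                       # optimal value function
print(v[:5])
```

The same structure is what the talk builds on: expanding z in a fixed set of (localized) basis functions turns the weight computation into a large but sparse linear problem.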
