Material Detail

Finite horizon exploration for path integral control problems

This video was recorded at the NIPS Workshop on On-line Trading of Exploration and Exploitation, Whistler 2006. We have recently developed a path integral method for solving a class of non-linear stochastic control problems in the continuous domain [1, 2]. Path integral (PI) control can be applied to time-dependent finite-horizon tasks (motor control, coordination between agents) and to static tasks (which behave similarly to discounted-reward reinforcement learning). In this control formalism, the cost-to-go or value function can be written explicitly as a function of the environment and rewards, in the form of a path integral; for PI control one therefore does not need to solve the Bellman equation. Computing the path integral can itself be complex, but methods and concepts from statistical physics, such as Monte Carlo sampling or the Laplace approximation, yield efficient approximations. The formalism also generalizes to multiple agents that jointly solve a task; in this case the agents must coordinate their actions not only through time but also among each other. It was recently shown that this problem can be mapped onto a graphical model inference problem and solved using the junction tree algorithm. Exact control solutions can be computed for, e.g., hundreds of agents, depending on the complexity of the cost function [3].
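The abstract's central point — that the cost-to-go can be obtained by averaging over sampled paths rather than by solving the Bellman equation — can be illustrated with a minimal Monte Carlo sketch. The 1-D dynamics, the quadratic running and terminal costs, and all parameter values below are illustrative assumptions, not taken from the talk: the estimate computed is J(x0) = -λ log E[exp(-S/λ)], where S is the cost accumulated along uncontrolled diffusion paths dx = σ dW.

```python
import math
import random

def cost_to_go(x0, horizon=1.0, dt=0.01, sigma=1.0, lam=1.0,
               n_samples=2000, seed=0):
    """Monte Carlo estimate of the PI-control cost-to-go J(x0).

    Samples uncontrolled diffusion paths dx = sigma dW, accumulates the
    path cost S (assumed quadratic here), and returns
    J(x0) = -lam * log( mean over paths of exp(-S / lam) ).
    """
    rng = random.Random(seed)
    n_steps = int(horizon / dt)
    total = 0.0
    for _ in range(n_samples):
        x = x0
        s = 0.0
        for _ in range(n_steps):
            s += 0.5 * x * x * dt                  # running state cost V(x) = x^2 / 2
            x += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        s += 0.5 * x * x                           # quadratic terminal cost
        total += math.exp(-s / lam)
    psi = total / n_samples                        # path-integral estimate of psi(x0)
    return -lam * math.log(psi)                    # cost-to-go J(x0)
```

Starting farther from the origin should yield a larger estimated cost-to-go (e.g. `cost_to_go(2.0) > cost_to_go(0.0)`), since paths launched from there accumulate more running cost before the noise can carry them back. Note that no optimization or dynamic programming step appears anywhere: the value function comes out of the sampling average directly, which is the point of the path integral formulation.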
