
Type of publication:  Incollection 
Entered by:  MN 
Title  Optimal control theory and the linear Bellman equation 
Bibtex cite ID  
Booktitle  Inference and Learning in Dynamic Models 
Year published  2009 
Publisher  Cambridge University Press 
Keywords  Optimal control theory 
Abstract  Optimal control theory is a mathematical description of how to act optimally
to gain future rewards. In this paper I give an introduction to
deterministic and stochastic control theory, partial observability,
learning, and the combined problem of inference and control. Subsequently, I
discuss a new class of nonlinear stochastic
control problems for which the Bellman equation becomes linear in the
control and which can be solved efficiently using a path integral.
In this control formalism the central concept of cost-to-go becomes a
free energy, and methods and concepts from probabilistic graphical
models and statistical physics can be readily applied. I illustrate the
theory with a number of examples. 
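The linearization mentioned in the abstract can be sketched with the standard log transform from the path-integral control literature; the symbols below (desirability \(\psi\), temperature \(\lambda\), state cost \(q\), drift \(f\), noise covariance \(\nu\)) are assumptions drawn from that literature, not quoted from the chapter itself:

```latex
% Dynamics: dx = (f(x,t) + u)\,dt + d\xi, noise covariance \nu,
% running cost: q(x,t) + \tfrac12 u^\top R u, under the condition \nu = \lambda R^{-1}.
% Writing the cost-to-go J through the desirability \psi linearizes the
% stochastic Hamilton--Jacobi--Bellman equation:
\begin{align}
  J(x,t) &= -\lambda \log \psi(x,t), \\
  -\partial_t \psi &= -\frac{q}{\lambda}\,\psi
      + f^\top \nabla \psi
      + \tfrac{1}{2}\,\operatorname{tr}\!\left(\nu\, \nabla^2 \psi\right).
\end{align}
```

Because this equation is linear in \(\psi\), the Feynman–Kac formula expresses \(\psi\) as an expectation over uncontrolled trajectories, which is the path-integral solution the abstract refers to.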
Authors  
Editors  
Topics

Attachments  
Total mark:  5 

