D-CIS Publication Database

Publication

Type of publication: Incollection
Entered by: LB
Title Continuous-State Reinforcement Learning with Fuzzy Approximation
Bibtex cite ID Busoniu-lnai08
Booktitle Adaptive Agents and Multi-Agent Systems III
Series Lecture Notes in Computer Science
Year published 2008
Volume 4865
Pages 27-43
Publisher Springer
Keywords reinforcement learning, approximate reinforcement learning, fuzzy approximation
Abstract
Reinforcement learning (RL) is a widely used learning paradigm for adaptive agents. There exist several convergent and consistent RL algorithms which have been intensively studied. In their original form, these algorithms require that the environment states and agent actions take values in a relatively small discrete set. Fuzzy representations for approximate, model-free RL have been proposed in the literature for the more difficult case where the state-action space is continuous. In this work, we propose a fuzzy approximation architecture similar to those previously used for Q-learning, but we combine it with the model-based Q-value iteration algorithm. We prove that the resulting algorithm converges. We also give a modified, asynchronous variant of the algorithm that converges at least as fast as the original version. An illustrative simulation example is provided.
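
Illustrative sketch (not part of the publication record): the abstract describes combining a fuzzy approximation of the Q-function with model-based Q-value iteration, plus an asynchronous variant. The Python fragment below is a minimal, assumption-laden sketch of that idea, assuming a one-dimensional state space with a triangular fuzzy partition, a finite action set, and a known deterministic model f and reward function rho; all names (triangular_memberships, fuzzy_q_iteration, asynchronous) are illustrative and not taken from the paper.

    import numpy as np

    def triangular_memberships(x, centers):
        """Triangular fuzzy membership degrees of a scalar state x over a
        1-D partition with sorted centers; the degrees sum to one."""
        phi = np.zeros(len(centers))
        if x <= centers[0]:
            phi[0] = 1.0
        elif x >= centers[-1]:
            phi[-1] = 1.0
        else:
            k = np.searchsorted(centers, x) - 1   # x lies in [centers[k], centers[k+1]]
            w = (x - centers[k]) / (centers[k + 1] - centers[k])
            phi[k], phi[k + 1] = 1.0 - w, w
        return phi

    def fuzzy_q_iteration(f, rho, centers, actions, gamma=0.95,
                          n_sweeps=200, asynchronous=False):
        """Model-based fuzzy Q-value iteration on a parameter matrix theta,
        where Q(x, u_j) is approximated by sum_i phi_i(x) * theta[i, j]."""
        theta = np.zeros((len(centers), len(actions)))
        for _ in range(n_sweeps):
            src = theta                    # parameters read by the backups
            if not asynchronous:
                theta = theta.copy()       # synchronous: write into a fresh copy
            for i, xi in enumerate(centers):
                for j, uj in enumerate(actions):
                    phi_next = triangular_memberships(f(xi, uj), centers)
                    # Bellman backup through the fuzzy approximator; in the
                    # asynchronous variant src is theta itself, so later
                    # backups in the same sweep already see the updated entries
                    theta[i, j] = rho(xi, uj) + gamma * np.max(phi_next @ src)
        return theta

    def greedy_action(x, theta, centers, actions):
        """Greedy policy with respect to the approximate Q-function."""
        return actions[int(np.argmax(triangular_memberships(x, centers) @ theta))]

    # Hypothetical usage on a toy 1-D problem:
    # theta = fuzzy_q_iteration(f=lambda x, u: np.clip(x + 0.1 * u, -1, 1),
    #                           rho=lambda x, u: -x**2,
    #                           centers=np.linspace(-1, 1, 21),
    #                           actions=np.array([-1.0, 0.0, 1.0]),
    #                           asynchronous=True)

With asynchronous=True, each backup in a sweep reads the entries already updated earlier in that sweep, which is the property the abstract appeals to for convergence at least as fast as the synchronous version.
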
Authors
Busoniu, Lucian
Ernst, Damien
De Schutter, Bart
Babuška, Robert
Editors
Tuyls, Karl
Nowe, Ann
Guessoum, Zahia
Kudenko, Daniel