Conference paper, 2007

Fitted Q-iteration in continuous action-space MDPs

Abstract

We consider continuous state, continuous action batch reinforcement learning where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration, where the greedy action selection is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values. We provide a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function based algorithms for continuous state and action problems.
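To make the procedure described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of a fitted Q-iteration variant in which the greedy max over a continuous action space is replaced by selecting, from a small candidate set of policies, the one that maximizes the average action value over the observed states. All names (fitted_q_iteration, candidate_policies, the choice of regressor, the discount factor) are illustrative assumptions.

# Sketch of fitted Q-iteration with policy search over a restricted candidate set.
# Assumes a batch of transitions (S, A, R, S_next) gathered by some behavior policy.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

GAMMA = 0.95  # assumed discount factor

def fitted_q_iteration(S, A, R, S_next, candidate_policies, n_iterations=50):
    """S, A: 2-D arrays of batch states/actions; R: 1-D rewards; S_next: next states.
    candidate_policies: list of functions mapping an array of states to an array of actions."""
    X = np.hstack([S, A])              # regression inputs: (state, action) pairs
    q = None                           # Q_0 is taken to be identically zero
    policy = candidate_policies[0]
    for _ in range(n_iterations):
        if q is None:
            targets = R                # first iteration: fit the immediate reward
        else:
            a_next = policy(S_next)    # actions prescribed by the current candidate policy
            targets = R + GAMMA * q.predict(np.hstack([S_next, a_next]))
        # regression step: fit the new action-value estimate
        q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
        # policy-search step: pick the candidate maximizing the average action value
        policy = max(
            candidate_policies,
            key=lambda pi: q.predict(np.hstack([S, pi(S)])).mean(),
        )
    return q, policy

In this sketch the restricted policy class is simply a finite list of callables, so the "search" is an exhaustive comparison; the paper's analysis covers more general restricted policy sets.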
Main file: rlca.pdf (129.85 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00203359, version 1 (09-01-2008)

Identifiers

  • HAL Id: inria-00203359, version 1

Cite

András Antos, Rémi Munos, Csaba Szepesvári. Fitted Q-iteration in continuous action-space MDPs. Neural Information Processing Systems (NIPS), 2007, Vancouver, Canada. ⟨inria-00203359⟩