Conference Paper, Year: 2007

Fitted Q-iteration in continuous action-space MDPs

Abstract

We consider continuous state, continuous action batch reinforcement learning where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration, where the greedy action selection is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values. We provide a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function based algorithms for continuous state and action problems.
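
To make the algorithmic idea concrete, below is a minimal Python sketch of one way the described variant could be organized: at each iteration, the usual greedy action selection (a pointwise maximization over a continuous action space) is replaced by a search over a finite set of candidate policies, choosing the one that maximizes the average of the current action-value estimates on the sample. The regressor choice (ExtraTreesRegressor), the function names, and the batch format are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor  # one possible regressor choice


def fitted_q_iteration(batch, candidate_policies, n_iterations=20, gamma=0.95):
    """Sketch of a fitted Q-iteration variant with policy search.

    batch: list of (state, action, reward, next_state) tuples, where
        states and actions are 1-D numpy arrays (continuous spaces).
    candidate_policies: finite collection of policies, each a callable
        mapping a state vector to an action vector (the restricted set
        of candidate policies mentioned in the abstract).
    """
    states = np.vstack([s for s, _, _, _ in batch])
    actions = np.vstack([a for _, a, _, _ in batch])
    rewards = np.array([r for _, _, r, _ in batch])
    next_states = np.vstack([sn for _, _, _, sn in batch])

    def q_predict(model, s, a):
        # Q is represented as a regressor over concatenated (state, action) inputs.
        return model.predict(np.hstack([s, a]))

    def policy_actions(pi, s):
        return np.vstack([pi(x) for x in s])

    q_model = None
    best_policy = None

    for _ in range(n_iterations):
        if q_model is None:
            # First iteration: bootstrap the targets with the immediate rewards.
            targets = rewards
        else:
            # Policy search step: instead of a pointwise greedy maximization
            # over the continuous action space, pick the candidate policy
            # that maximizes the *average* action value on the sample.
            best_policy = max(
                candidate_policies,
                key=lambda pi: q_predict(
                    q_model, next_states, policy_actions(pi, next_states)
                ).mean(),
            )
            # Value backup: standard fitted Q-iteration regression targets.
            targets = rewards + gamma * q_predict(
                q_model, next_states, policy_actions(best_policy, next_states)
            )

        # Fit the next Q-function estimate by regression on (state, action) -> target.
        q_model = ExtraTreesRegressor(n_estimators=50)
        q_model.fit(np.hstack([states, actions]), targets)

    return q_model, best_policy
```

One design point this sketch highlights: restricting the maximization to a candidate policy set avoids solving a continuous optimization over actions at every sampled state, which is what makes the finite-time analysis of this value-function-based approach tractable in the continuous-action setting.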
Main file
rlca.pdf (129.85 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00203359, version 1 (09-01-2008)

Identifiers

  • HAL Id: inria-00203359, version 1

Cite

Andras Antos, Rémi Munos, Csaba Szepesvari. Fitted Q-iteration in continuous action-space MDPs. Neural Information Processing Systems, 2007, Vancouver, Canada. ⟨inria-00203359⟩
233 Views
258 Downloads
