Conference paper, 2012

Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress

Abstract

Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without considering the empirical prediction error. For example, PAC-MDP approaches such as R-MAX base their model certainty on the amount of collected data, while Bayesian approaches assume a prior over the transition dynamics. We propose extensions to such approaches that drive exploration solely from empirical estimates of the learner's accuracy and learning progress. We provide a "sanity check" theoretical analysis, discussing the behavior of our extensions in the standard stationary finite state-action case. We then present experimental studies demonstrating the robustness of these exploration measures in non-stationary environments and in cases where the original approaches are misled by incorrect domain assumptions.
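
The contrast between count-based certainty and empirically estimated learning progress can be illustrated with a short sketch. The Python code below is only a minimal illustration under a tabular assumption, not the authors' implementation; the names (TabularTransitionModel, rmax_style_bonus, learning_progress_bonus), the visit threshold m, and the window length are hypothetical choices made for this example.

import numpy as np

# Minimal sketch (illustrative, not the paper's algorithm): contrast a
# count-based certainty measure in the style of R-MAX with an exploration
# bonus driven by empirically estimated learning progress, i.e. the recent
# drop in the learned model's prediction error.

class TabularTransitionModel:
    def __init__(self, n_states, n_actions):
        self.counts = np.zeros((n_states, n_actions, n_states))

    def update(self, s, a, s_next):
        self.counts[s, a, s_next] += 1

    def predict(self, s, a):
        c = self.counts[s, a]
        total = c.sum()
        if total == 0:
            return np.full(c.shape, 1.0 / c.shape[0])  # uniform when no data
        return c / total

def rmax_style_bonus(model, s, a, m=5):
    # Count-based certainty: treat (s, a) as unknown (maximally rewarding)
    # until it has been visited at least m times.
    return 1.0 if model.counts[s, a].sum() < m else 0.0

def learning_progress_bonus(error_history, s, a, window=10):
    # Empirical learning progress: how much the per-visit prediction error
    # (e.g. negative log-likelihood of the observed next state) has dropped
    # between an older and a more recent window of visits to (s, a).
    errs = error_history.get((s, a), [])
    if len(errs) < 2 * window:
        return 1.0  # too little data to estimate progress: stay optimistic
    older = np.mean(errs[-2 * window:-window])
    recent = np.mean(errs[-window:])
    return max(0.0, older - recent)  # positive while the model keeps improving

The count-based bonus only reflects how much data has been collected, so it cannot react when the environment changes or when prior assumptions are wrong; the learning-progress bonus instead stays positive wherever the model's empirical predictions are still improving.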
Main file
nips.pdf (222.53 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00755248, version 1 (20-11-2012)

Identifiers

  • HAL Id: hal-00755248, version 1

Cite

Manuel Lopes, Tobias Lang, Marc Toussaint, Pierre-Yves Oudeyer. Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress. Neural Information Processing Systems (NIPS), Dec 2012, Lake Tahoe, United States. ⟨hal-00755248⟩
509 views
425 downloads
