Conference paper (Year: 2011)

Active Learning of MDP Models

Mauricio Araya-López
Olivier Buffet
Vincent Thomas
François Charpillet

Abstract

We consider the active learning problem of inferring the transition model of a Markov Decision Process by acting and observing transitions. This is particularly useful when no reward function is a priori defined. Our proposal is to cast the active learning task as a utility maximization problem using Bayesian reinforcement learning with belief-dependent rewards. After presenting three possible performance criteria, we derive from them the belief-dependent rewards to be used in the decision-making process. As computing the optimal Bayesian value function is intractable for large horizons, we use a simple algorithm to approximately solve this optimization problem. Despite the sub-optimality of this technique, we show experimentally that our proposal is efficient in a number of domains.
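The paper itself defines the exact performance criteria and the approximate solver; as a rough illustration of the general idea only, the Python sketch below maintains a Dirichlet belief over each (state, action) transition distribution, uses a variance-based uncertainty reduction as a hypothetical belief-dependent reward, and picks actions with a myopic one-step lookahead. The toy problem sizes, the uncertainty proxy, and the greedy policy are assumptions made here for illustration, not the paper's criteria or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem sizes (illustrative, not from the paper).
N_STATES, N_ACTIONS = 4, 2

# True transition model, unknown to the learner; used only to simulate the environment.
true_T = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))

# Belief over the transition model: one Dirichlet count vector per (state, action).
alpha = np.ones((N_STATES, N_ACTIONS, N_STATES))

def uncertainty(a):
    """Sum of variances of a Dirichlet(a) distribution: a simple uncertainty
    proxy (an assumption here, not one of the paper's three criteria)."""
    a0 = a.sum()
    return (a * (a0 - a) / (a0 ** 2 * (a0 + 1))).sum()

def expected_uncertainty_reduction(alpha_sa):
    """Belief-dependent reward: expected drop in the uncertainty proxy after
    one more transition, averaged under the current posterior mean."""
    before = uncertainty(alpha_sa)
    mean = alpha_sa / alpha_sa.sum()
    after = 0.0
    for s_next in range(N_STATES):
        a_new = alpha_sa.copy()
        a_new[s_next] += 1
        after += mean[s_next] * uncertainty(a_new)
    return before - after

# Myopic (one-step lookahead) active learning loop: in the current state, take the
# action whose belief-dependent reward is largest, observe, and update the belief.
state = 0
for step in range(200):
    gains = [expected_uncertainty_reduction(alpha[state, a]) for a in range(N_ACTIONS)]
    action = int(np.argmax(gains))
    next_state = rng.choice(N_STATES, p=true_T[state, action])
    alpha[state, action, next_state] += 1  # Bayesian update of the Dirichlet counts
    state = next_state

print("Posterior mean for (s=0, a=0):", alpha[0, 0] / alpha[0, 0].sum())
```

The greedy one-step lookahead stands in for the paper's approximate solver: it is cheap to compute but ignores how today's action affects which states can be explored later, which is exactly the sub-optimality the abstract acknowledges.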
Main file
EWRL-article.pdf (189.52 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00642909, version 1 (19-11-2011)

Identifiers

  • HAL Id: hal-00642909, version 1

Cite

Mauricio Araya-López, Olivier Buffet, Vincent Thomas, François Charpillet. Active Learning of MDP Models. European Workshop On Reinforcement Learning, Sep 2011, Athens, Greece. ⟨hal-00642909⟩
278 Views
292 Downloads
