Conference paper, 2010

Bayesian Multi-Task Reinforcement Learning

Alessandro Lazaric
  • Role: Author
Mohammad Ghavamzadeh
  • Role: Author
  • PersonId: 868946

Abstract

We consider the problem of multi-task reinforcement learning where the learner is provided with a set of tasks, for which only a small number of samples can be generated for any given policy. As the number of samples may not be enough to learn an accurate evaluation of the policy, it becomes necessary to identify classes of tasks with similar structure and to learn them jointly. We consider the case where the tasks share structure in their value functions, and model this by assuming that the value functions are all sampled from a common prior. We adopt the Gaussian process temporal-difference value function model and use a hierarchical Bayesian approach to model the distribution over the value functions. We study two cases: one where all the value functions belong to the same class, and one where they belong to an undefined number of classes. For each case, we present a hierarchical Bayesian model, and derive inference algorithms for (i) joint learning of the value functions, and (ii) efficient transfer of the information gained in (i) to assist learning the value function of a newly observed task.
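The Gaussian process temporal-difference (GPTD) value function model mentioned in the abstract is the single-task building block that the hierarchical prior is placed over. Below is a minimal Python sketch of that building block, assuming a one-dimensional toy trajectory, an RBF kernel, white observation noise, and illustrative hyperparameters; the helper names (rbf_kernel, gptd_posterior) and the toy example are hypothetical and this is not the authors' implementation.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D states."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gptd_posterior(states, rewards, gamma=0.95, noise=0.1):
    """Posterior mean of the value function at the visited states.

    The Bellman residual model R_t = V(x_t) - gamma * V(x_{t+1}) + noise
    is linear in V, so with a GP prior V ~ GP(0, K) the posterior mean
    has the closed form  E[V | R] = K H^T (H K H^T + noise^2 I)^{-1} R,
    where H encodes the temporal-difference structure of the trajectory.
    (White noise is used here as a simplification of the full GPTD noise model.)
    """
    T = len(states)
    K = rbf_kernel(states, states)
    # H maps values at the T visited states to the T-1 Bellman residuals.
    H = np.zeros((T - 1, T))
    for t in range(T - 1):
        H[t, t] = 1.0
        H[t, t + 1] = -gamma
    G = H @ K @ H.T + noise ** 2 * np.eye(T - 1)
    alpha = np.linalg.solve(G, rewards[:-1])
    return K @ H.T @ alpha  # posterior mean of V at the visited states

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy random-walk trajectory on [0, 1]; reward of a transition is the
    # current state plus noise (purely illustrative).
    states = np.clip(np.cumsum(rng.normal(0, 0.1, size=50)) + 0.5, 0.0, 1.0)
    rewards = states + rng.normal(0, 0.05, size=50)
    v_hat = gptd_posterior(states, rewards)
    print("estimated V at first 5 visited states:", np.round(v_hat[:5], 3))

In the multi-task setting described in the abstract, the prior over each task's value function (and, in the multi-class case, the assignment of tasks to classes) would be shared and inferred jointly across tasks; that hierarchical layer is the contribution of the paper, not shown in this single-task sketch.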

Domains

Computer Science
Main file
bmtl.pdf (858.14 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00475214, version 1 (21-04-2010)

Identifiers

  • HAL Id: inria-00475214, version 1

Cite

Alessandro Lazaric, Mohammad Ghavamzadeh. Bayesian Multi-Task Reinforcement Learning. ICML - 27th International Conference on Machine Learning, Jun 2010, Haifa, Israel. pp. 599-606. ⟨inria-00475214⟩
1224 views
2058 downloads
