Conference Paper, 2012

Near-Optimal BRL using Optimistic Local Transitions

Mauricio Araya-López
Vincent Thomas
Olivier Buffet

Abstract

Model-based Bayesian Reinforcement Learning (BRL) allows a sound formalization of the problem of acting optimally while facing an unknown environment, i.e., avoiding the exploration-exploitation dilemma. However, algorithms explicitly addressing BRL suffer from such a combinatorial explosion that a large body of work relies on heuristic algorithms. This paper introduces BOLT, a simple and (almost) deterministic heuristic algorithm for BRL which is optimistic about the transition function. We analyze BOLT's sample complexity and show that, under certain parameters, the algorithm is near-optimal in the Bayesian sense with high probability. Experimental results then highlight the key differences between this method and previous work.
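The abstract does not spell out BOLT's construction, only that it is a model-based BRL heuristic that is "optimistic about the transition function." As a rough illustration of what that can look like in a tabular setting, the sketch below maintains Dirichlet counts over transitions and, at planning time, adds a few optimistic pseudo-counts before solving the mean MDP by value iteration. The bonus parameter `eta`, the known-reward assumption, and the way the optimistic pseudo-counts are placed are assumptions made for illustration; they are not taken from the paper and may differ from BOLT's exact algorithm.

```python
# Hypothetical sketch: tabular BRL agent that is optimistic about transitions.
# Dirichlet counts alpha[s, a, :] model the unknown transition function; at each
# planning step, eta pseudo-counts are placed on the most favorable successor
# state (greedily w.r.t. the current value estimate), and the resulting
# "optimistic mean" MDP is solved by value iteration.
import numpy as np

def plan_optimistic(alpha, R, gamma=0.95, eta=5.0, n_iter=200):
    """alpha: (S, A, S) Dirichlet counts; R: (S, A) known rewards."""
    S, A, _ = alpha.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                counts = alpha[s, a].copy()
                # Optimistic local transition: add eta artificial visits
                # to the successor state with the highest current value.
                counts[np.argmax(V)] += eta
                T = counts / counts.sum()          # optimistic mean model
                Q[s, a] = R[s, a] + gamma * T @ V
        V = Q.max(axis=1)
    return Q

def act(Q, s):
    """Act greedily with respect to the optimistic Q-values."""
    return int(np.argmax(Q[s]))

# After observing a transition (s, a, s_next), update the posterior:
#   alpha[s, a, s_next] += 1
```

The design choice illustrated here is that optimism lives in the transition model rather than in a reward bonus, which keeps the planning step an ordinary MDP solve on an (almost) deterministic modification of the posterior mean model.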
Main file: icml12.pdf (270.19 KB). Origin: files produced by the author(s).

Dates and versions

hal-00755270, version 1 (20-11-2012)

Identifiers

  • HAL Id: hal-00755270, version 1

Cite

Mauricio Araya-López, Vincent Thomas, Olivier Buffet. Near-Optimal BRL using Optimistic Local Transitions. International Conference on Machine Learning - ICML 2012, Jun 2012, Edinburgh, United Kingdom. ⟨hal-00755270⟩