Report (Technical Report), Year: 2011

Reinforcement Learning with a Near Optimal Rate of Convergence

Abstract

We consider the problem of model-free reinforcement learning in Markovian decision processes (MDPs) under the PAC ("probably approximately correct") model. We introduce a new variant of Q-learning, called speedy Q-learning (SQL), to address the slow convergence of the standard Q-learning algorithm, and prove PAC bounds on the performance of SQL. The bounds show that, for any MDP with n state-action pairs and discount factor \gamma \in [0, 1), a total of O(n \log(n/\delta)/((1 − \gamma)^4 \epsilon^2)) steps suffices for the SQL algorithm to converge to an \epsilon-optimal action-value function with probability 1 − \delta. We also establish a lower bound of \Omega(n \log(1/\delta)/((1 − \gamma)^2 \epsilon^2)) for all reinforcement learning algorithms, which matches the upper bound in terms of \epsilon, \delta, and n (up to a logarithmic factor). Further, our results have better dependencies on \epsilon and 1 − \gamma, and are thus tighter, than the best available results for Q-learning. SQL also improves on existing results for batch Q-value iteration, so far considered to be more efficient than incremental methods such as Q-learning.
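The abstract describes SQL only at a high level. As a concrete illustration, below is a minimal NumPy sketch of the synchronous speedy Q-learning iteration as stated in the report: each step applies the empirical Bellman operator to the two most recent iterates (using the same sampled next states) and combines them with the aggressive step size \alpha_k = 1/(k+1). The function name, the tabular setup with a known sampling kernel, and the zero initialization of both iterates are illustrative assumptions, not prescribed by the abstract.

```python
import numpy as np

def speedy_q_learning(P, R, gamma, num_iters, rng=None):
    """Sketch of synchronous speedy Q-learning (SQL).

    P: (n_states, n_actions, n_states) transition kernel (used only to
       sample next states; the algorithm itself is model-free).
    R: (n_states, n_actions) expected rewards.

    Update (assumed from the report, alpha_k = 1/(k+1)):
      Q_{k+1} = Q_k + alpha_k (T_k Q_{k-1} - Q_k)
                    + (1 - alpha_k)(T_k Q_k - T_k Q_{k-1})
    """
    rng = np.random.default_rng() if rng is None else rng
    n_states, n_actions, _ = P.shape
    Q_prev = np.zeros((n_states, n_actions))  # Q_{k-1}, init equal to Q_0
    Q = np.zeros((n_states, n_actions))       # Q_k
    for k in range(num_iters):
        alpha = 1.0 / (k + 1)
        # Sample one next state per (state, action) pair.
        y = np.array([[rng.choice(n_states, p=P[s, a])
                       for a in range(n_actions)]
                      for s in range(n_states)])
        # Empirical Bellman operator T_k applied to both iterates,
        # crucially with the *same* sampled next states y.
        T_Q_prev = R + gamma * Q_prev[y].max(axis=-1)
        T_Q = R + gamma * Q[y].max(axis=-1)
        Q_prev, Q = Q, (Q + alpha * (T_Q_prev - Q)
                        + (1 - alpha) * (T_Q - T_Q_prev))
    return Q

# Hypothetical usage on a small random MDP:
rng = np.random.default_rng(0)
n_s, n_a = 5, 2
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))
R = rng.uniform(size=(n_s, n_a))
Q_hat = speedy_q_learning(P, R, gamma=0.9, num_iters=10_000, rng=rng)
```

The difference from standard Q-learning is the correction term built from two successive iterates, which permits the fast step size 1/(k+1); this is what underlies the improved dependence on 1/(1 − \gamma) in the bound above.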

Domains

Computer Science
Main file

SQL_vNov19.pdf (338.72 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00636615 , version 1 (27-10-2011)
inria-00636615 , version 2 (29-11-2011)

Identifiers

  • HAL Id: inria-00636615, version 2

Cite

Mohammad Gheshlaghi Azar, Rémi Munos, Mohammad Ghavamzadeh, Hilbert Kappen. Reinforcement Learning with a Near Optimal Rate of Convergence. [Technical Report] 2011. ⟨inria-00636615v2⟩
