Conference Papers · Year: 2018

Convergence of Online and Approximate Multiple-Step Lookahead Policy Iteration

Abstract

Anderson acceleration (Anderson, 1965) is an old and simple method for accelerating the computation of a fixed point. However, to the best of our knowledge and quite surprisingly, it has never been applied to dynamic programming or reinforcement learning. In this paper, we briefly explain what Anderson acceleration is and how it can be applied to value iteration, supported by preliminary experiments showing a significant speed-up of convergence, which we critically discuss. We also discuss how this idea could be applied more generally to (deep) reinforcement learning.
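
The record stops at the abstract, but to make the idea concrete, here is a minimal sketch of Anderson-accelerated value iteration on a tabular MDP. This illustrates the general technique the abstract names, not the authors' algorithm or code: the function names, the (A, S, S) and (S, A) shapes assumed for the transition tensor P and reward matrix R, the memory size m, and the regularizer reg are all assumptions made for this example.

```python
import numpy as np

def bellman(V, P, R, gamma):
    """Bellman optimality operator for a tabular MDP.

    P: (A, S, S) transition probabilities, R: (S, A) rewards (assumed shapes).
    """
    Q = R + gamma * (P @ V).T   # (P @ V) is (A, S); transpose to (S, A)
    return Q.max(axis=1)        # greedy backup over actions

def anderson_vi(P, R, gamma, m=5, iters=1000, tol=1e-8, reg=1e-10):
    """Value iteration with Anderson acceleration over the last m iterates."""
    V = np.zeros(R.shape[0])
    Vs, Fs = [], []                       # past iterates V_i and images T V_i
    for _ in range(iters):
        TV = bellman(V, P, R, gamma)
        Vs.append(V); Fs.append(TV)
        Vs, Fs = Vs[-m:], Fs[-m:]         # sliding memory of size m
        # Residuals r_i = T V_i - V_i, stacked as columns of an (S, k) matrix.
        Rmat = np.stack([f - v for f, v in zip(Fs, Vs)], axis=1)
        k = Rmat.shape[1]
        # Coefficients alpha minimizing ||Rmat @ alpha|| s.t. sum(alpha) = 1,
        # via regularized normal equations and a Lagrange rescaling.
        G = Rmat.T @ Rmat + reg * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        alpha = w / w.sum()
        V_next = np.stack(Fs, axis=1) @ alpha   # extrapolated iterate
        if np.max(np.abs(V_next - V)) < tol:
            return V_next
        V = V_next
    return V
```

With m = 1 the loop reduces exactly to standard value iteration. Note the sketch includes no safeguard against the non-monotone steps that Anderson extrapolation can take on the (nonlinear) Bellman operator, which may relate to the critical discussion of the speed-up that the abstract mentions.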
Main file: ewrl_approx_cr_final.pdf (427.91 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01927977, version 1 (20-11-2018)


Cite

Yonathan Efroni, Gal Dalal, Bruno Scherrer, Shie Mannor. Convergence of Online and Approximate Multiple-Step Lookahead Policy Iteration. EWRL 2018 - 14th European workshop on Reinforcement Learning, Oct 2018, Lille, France. ⟨hal-01927977⟩