Conference Papers, Year: 2020

Momentum in Reinforcement Learning

Nino Vieillard
Bruno Scherrer
Olivier Pietquin
Matthieu Geist

Abstract

We adapt the concept of momentum from optimization to reinforcement learning. Viewing state-action value functions as analogous to gradients in optimization, we interpret momentum as an average of consecutive q-functions. We derive Momentum Value Iteration (MoVI), a variation of Value Iteration that incorporates this momentum idea. Our analysis shows that this allows MoVI to average errors over successive iterations. We show that the proposed approach can be readily extended to deep learning. Specifically, we propose a simple improvement of DQN based on MoVI, and evaluate it on Atari games.
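The abstract sketches the key mechanism: rather than acting on the most recent q-function alone, maintain a running average of the q-functions produced so far and use that average for the greedy step and the Bellman backup. Below is a minimal tabular sketch of this interpretation, assuming a finite MDP given as a transition tensor P of shape (S, A, S) and a reward matrix R of shape (S, A); the function name movi_sketch, its arguments, and the exact placement of the averaging are illustrative assumptions, not the paper's precise MoVI update.

    import numpy as np

    def movi_sketch(P, R, gamma, n_iters=200):
        # Illustrative momentum-style value iteration on a tabular MDP.
        # P: transitions, shape (S, A, S); R: rewards, shape (S, A).
        # h is a running average of successive q-functions (the "momentum"
        # described in the abstract) and drives both the greedy policy
        # and the Bellman backup.
        S, A = R.shape
        q = np.zeros((S, A))   # latest q-function
        h = np.zeros((S, A))   # average of q-functions produced so far
        for k in range(1, n_iters + 1):
            greedy = h.argmax(axis=1)          # policy greedy w.r.t. the average
            v = h[np.arange(S), greedy]        # state values of the averaged q
            q = R + gamma * (P @ v)            # Bellman backup against the average
            h += (q - h) / k                   # incremental average of q_1..q_k
        return h.argmax(axis=1), h

    # Usage on a small random MDP (illustrative):
    rng = np.random.default_rng(0)
    S, A = 5, 3
    P = rng.dirichlet(np.ones(S), size=(S, A))  # random transition kernel
    R = rng.uniform(size=(S, A))
    policy, h = movi_sketch(P, R, gamma=0.95)

When the backups are noisy (e.g., estimated from samples), the running average damps per-iteration estimation errors, which is the error-averaging effect the abstract refers to.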
Main file
vieillard20a-supp.pdf (5.38 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03137343, version 1 (10-02-2021)

Identifiers

  • HAL Id: hal-03137343, version 1

Cite

Nino Vieillard, Bruno Scherrer, Olivier Pietquin, Matthieu Geist. Momentum in Reinforcement Learning. AISTATS 2020 - 23rd International Conference on Artificial Intelligence and Statistics, Aug 2020, Palermo / Virtual, Italy. ⟨hal-03137343⟩
48 Views
76 Downloads
