Conference paper, Year: 2019

A Theory of Regularized Markov Decision Processes

Matthieu Geist
Bruno Scherrer
Olivier Pietquin

Abstract

Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.
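For intuition, the two building blocks named above can be sketched as follows (a minimal illustration in standard notation; the paper's exact definitions and symbols may differ). Given a strongly convex regularizer Ω over the action simplex, a regularized Bellman evaluation operator subtracts Ω from the usual expected backup, and the corresponding optimality operator is obtained through the Legendre-Fenchel transform (convex conjugate) Ω*:

% Illustrative sketch only; the notation is assumed, not taken verbatim from the paper.
% Regularized Bellman evaluation operator for a policy \pi and regularizer \Omega:
\[
  [T_{\pi,\Omega} v](s) = \big\langle \pi(\cdot \mid s),\, q_v(s,\cdot) \big\rangle - \Omega\big(\pi(\cdot \mid s)\big),
  \qquad
  q_v(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\big[v(s')\big].
\]
% Regularized Bellman optimality operator, written via the Legendre-Fenchel transform \Omega^*:
\[
  [T_{*,\Omega} v](s) = \max_{\pi(\cdot \mid s) \in \Delta_A}
    \Big\{ \big\langle \pi(\cdot \mid s),\, q_v(s,\cdot) \big\rangle - \Omega\big(\pi(\cdot \mid s)\big) \Big\}
  = \Omega^*\big(q_v(s,\cdot)\big).
\]
% Example: with the negative entropy \Omega(\pi) = \sum_a \pi(a)\ln\pi(a),
% the conjugate is the log-sum-exp, i.e. the "soft" backup used by Soft Q-learning:
\[
  \Omega^*\big(q_v(s,\cdot)\big) = \ln \sum_{a} \exp\big(q_v(s,a)\big).
\]

With Ω ≡ 0 these reduce to the standard (unregularized) Bellman operators, which is the sense in which the theory generalizes the classical setting.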

Dates and versions

hal-02273741, version 1 (29-08-2019)

Identifiers

Cite

Matthieu Geist, Bruno Scherrer, Olivier Pietquin. A Theory of Regularized Markov Decision Processes. ICML 2019 - Thirty-sixth International Conference on Machine Learning, Jun 2019, Long Beach, United States. ⟨hal-02273741⟩