Conference Paper, Year: 2019

A Theory of Regularized Markov Decision Processes

Matthieu Geist
Bruno Scherrer
Olivier Pietquin

Abstract

Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent.
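
For concreteness, the following is a minimal sketch of the construction the abstract describes; the notation (regularizer $\Omega$, value $v$, action-value $q_v$) is assumed here and is not taken from the paper page itself. For a strongly convex regularizer $\Omega : \Delta_{\mathcal{A}} \to \mathbb{R}$, a regularized Bellman evaluation operator can be written as

$$[T_{\pi,\Omega} v](s) = \langle \pi(\cdot|s), q_v(s,\cdot) \rangle - \Omega(\pi(\cdot|s)), \qquad q_v(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot|s,a)}[v(s')],$$

and the corresponding regularized optimality operator is its maximum over policies, which is exactly the Legendre-Fenchel transform (convex conjugate) $\Omega^*$ of the regularizer applied to $q_v$:

$$[T_{*,\Omega} v](s) = \max_{\pi(\cdot|s) \in \Delta_{\mathcal{A}}} \Big( \langle \pi(\cdot|s), q_v(s,\cdot) \rangle - \Omega(\pi(\cdot|s)) \Big) = \Omega^*(q_v(s,\cdot)).$$

For example, taking $\Omega$ to be the scaled negative entropy, $\Omega(\pi(\cdot|s)) = \tau \sum_a \pi(a|s) \ln \pi(a|s)$, gives the log-sum-exp conjugate $\Omega^*(q_v(s,\cdot)) = \tau \ln \sum_a \exp(q_v(s,a)/\tau)$ with a softmax maximizer, which recovers the "soft" operators underlying Soft Q-learning.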

Dates and versions

hal-02273741, version 1 (29-08-2019)


Cite

Matthieu Geist, Bruno Scherrer, Olivier Pietquin. A Theory of Regularized Markov Decision Processes. ICML 2019 - Thirty-sixth International Conference on Machine Learning, Jun 2019, Long Beach, United States. ⟨hal-02273741⟩