Preprint / Working Paper, Year: 2013

Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee

Bruno Scherrer
Matthieu Geist

Abstract

Local Policy Search is a popular reinforcement learning approach for handling large state spaces. Formally, it searches locally in a parameterized policy space in order to maximize the associated value function averaged over some predefined distribution. It is commonly believed that the best one can hope for in general from such an approach is a local optimum of this criterion. In this article, we show the following surprising result: any (approximate) local optimum enjoys a global performance guarantee. We compare this guarantee with the one satisfied by Direct Policy Iteration, an approximate dynamic programming algorithm that does some form of Policy Search: while the approximation error of Local Policy Search may generally be bigger (because local search requires considering a space of stochastic policies), we argue that the concentrability coefficient appearing in the performance bound is much nicer. Finally, we discuss several practical and theoretical consequences of our analysis.
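To make the setting described above concrete, here is a minimal, purely illustrative sketch (not the authors' code) of local policy search on a made-up toy MDP: a softmax (hence stochastic) policy is parameterized by theta, and the criterion J(theta) = nu^T V_pi_theta, i.e. the value function averaged over a predefined distribution nu, is maximized by simple finite-difference gradient ascent until an (approximate) local optimum is reached. All transition probabilities, rewards, and step sizes below are arbitrary assumptions made for illustration.

    import numpy as np

    # Toy 2-state, 2-action MDP (all numbers are made up for illustration).
    n_states, n_actions, gamma = 2, 2, 0.9
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s'] transition probabilities
                  [[0.7, 0.3], [0.05, 0.95]]])
    R = np.array([[0.0, 0.1],                    # R[s, a] expected rewards
                  [1.0, 0.5]])
    nu = np.array([0.5, 0.5])                    # predefined state distribution

    def policy(theta):
        # Softmax (stochastic) policy pi(a|s) parameterized by theta[s, a].
        e = np.exp(theta - theta.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def J(theta):
        # Criterion of local policy search: nu^T V_pi, via exact policy evaluation.
        pi = policy(theta)
        P_pi = np.einsum('sa,sat->st', pi, P)    # state transition matrix under pi
        r_pi = np.einsum('sa,sa->s', pi, R)      # expected immediate reward under pi
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        return nu @ V

    # Local search: finite-difference gradient ascent towards a (local) stationary point.
    theta = np.zeros((n_states, n_actions))
    eps, lr = 1e-5, 0.5
    for _ in range(2000):
        grad = np.zeros_like(theta)
        for idx in np.ndindex(*theta.shape):
            d = np.zeros_like(theta)
            d[idx] = eps
            grad[idx] = (J(theta + d) - J(theta - d)) / (2 * eps)
        theta += lr * grad

    print("averaged value at the local optimum:", J(theta))

The point of the paper is that the parameter vector returned by such a procedure, although obtained by purely local optimization of the averaged value, satisfies a global performance guarantee.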
Main file: report.pdf (206.26 Ko)
Origin: Files produced by the author(s)

Dates and versions

hal-00829548 , version 1 (06-06-2013)

Identifiers

Cite

Bruno Scherrer, Matthieu Geist. Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee. 2013. ⟨hal-00829548⟩
