Journal article, Journal of Machine Learning Research, 2006

Policy Gradient in Continuous Time

Abstract

Policy search is a method for approximately solving an optimal control problem by performing a parametric optimization search in a given class of parameterized policies. In order to apply a local optimization technique, such as a gradient method, we wish to evaluate the sensitivity of the performance measure with respect to the policy parameters, the so-called policy gradient. This paper is concerned with the estimation of the policy gradient for continuous-time, deterministic state dynamics, in a reinforcement learning framework, that is, when the decision maker has no model of the state dynamics. We show that the likelihood ratio methods commonly used in discrete time fail to estimate the gradient because they suffer from a variance explosion when the discretization time-step decreases to 0. We describe an alternative approach based on the approximation of the pathwise derivative, which leads to a policy gradient estimate that converges almost surely to the true gradient as the time-step tends to 0. The underlying idea starts with the derivation of an explicit representation of the policy gradient using pathwise derivation; this derivation makes use of the knowledge of the state dynamics. Then, in order to estimate the gradient from observable data only, we use a stochastic policy to discretize the continuous deterministic system into a stochastic discrete process, which makes it possible to replace the unknown coefficients by quantities that depend solely on known data. We prove the almost sure convergence of this estimate to the true policy gradient as the discretization time-step goes to zero. The method is illustrated on two target problems, one with a discrete and one with a continuous control space.
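
As a rough illustration of the variance-explosion claim, the sketch below runs a REINFORCE-style likelihood-ratio estimator on a toy one-dimensional system with a Gaussian stochastic policy. The dynamics, reward, and policy parameterization are illustrative assumptions, not the paper's experimental setup; it is meant only to show that, for a fixed horizon, the empirical variance of the gradient samples grows roughly like 1/dt as the time-step shrinks, which is the failure mode the pathwise approach is designed to avoid.

# Minimal sketch (illustrative assumptions, not the paper's algorithm):
# likelihood-ratio policy gradient on dx/dt = a with a Gaussian policy
# a ~ N(theta * x, sigma^2) and terminal cost -x(T)^2.
import numpy as np

rng = np.random.default_rng(0)

T = 1.0          # control horizon
theta = 0.5      # scalar policy parameter
sigma = 0.3      # exploration noise of the stochastic policy

def episode_gradient(dt):
    """One REINFORCE sample of dJ/dtheta for a given time-step dt."""
    n_steps = int(round(T / dt))
    x = 1.0
    score = 0.0                              # sum of d/dtheta log pi(a_t | x_t)
    for _ in range(n_steps):
        mean = theta * x
        a = rng.normal(mean, sigma)
        score += (a - mean) * x / sigma**2   # score of the Gaussian policy
        x += a * dt                          # Euler step of the deterministic dynamics
    reward = -x**2                           # terminal cost: drive the state to 0
    return score * reward                    # likelihood-ratio gradient estimate

for dt in (0.1, 0.01, 0.001):
    samples = np.array([episode_gradient(dt) for _ in range(2000)])
    print(f"dt={dt:6.3f}  mean={samples.mean():8.3f}  variance={samples.var():12.1f}")

The score sum accumulates one term of variance about x^2/sigma^2 per step, and the number of steps is T/dt, so the sample variance of the estimate grows without bound as dt goes to 0 even though its mean stays roughly stable.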
Main file: munos06b.pdf (207.06 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

inria-00117152, version 1 (30-11-2006)

Identifiers

  • HAL Id: inria-00117152, version 1

Cite

Rémi Munos. Policy Gradient in Continuous Time. Journal of Machine Learning Research, 2006, 7, pp.771-791. ⟨inria-00117152⟩