Conference paper, 2008

Incremental Basis Function Expansion in Reinforcement Learning using Cascade-Correlation Networks

Abstract

In reinforcement learning, it is common practice to map the state(-action) space to a different space using basis functions. This transformation aims to represent the input data in a more informative form that facilitates and improves subsequent steps. Since a "good" set of basis functions results in better solutions, and defining such functions by hand becomes increasingly challenging as problem complexity grows, it is beneficial to be able to generate them automatically. In this paper, we propose a new approach, based on the Bellman residual, for constructing basis functions using the cascade-correlation learning architecture. We show how this approach can be applied to the Least-Squares Policy Iteration (LSPI) algorithm in order to obtain a better approximation of the value function and, consequently, improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically on several benchmark problems.
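To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of growing the basis used by LSTD-Q/LSPI from the Bellman residual. The toy chain MDP, the coarse quadratic base features, and the random-candidate search that stands in for cascade-correlation's candidate training are all assumptions made purely for illustration.

```python
# Sketch: LSPI with incremental basis expansion driven by the Bellman residual.
# Everything below (chain MDP, feature choices, candidate search) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

N, GAMMA = 10, 0.95          # chain of N states, discount factor
ACTIONS = (-1, +1)           # move left / right

def step(s, a):
    """Deterministic chain: reward 1 when the right end is reached."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

def collect_samples(n=2000):
    samples = []
    for _ in range(n):
        s = int(rng.integers(N))
        a = int(rng.choice(ACTIONS))
        s2, r = step(s, a)
        samples.append((s, a, r, s2))
    return samples

def features(s, a, units):
    """Coarse base features (quadratic in the normalized state, per action)
    plus one tanh output per learned basis function."""
    z = s / (N - 1)
    x = np.zeros(6)
    off = 0 if a == -1 else 3
    x[off:off + 3] = [1.0, z, z * z]
    hidden = [np.tanh(w[0] * z + w[1] * a + w[2]) for w in units]
    return np.concatenate([x, np.array(hidden)])

def lstdq(samples, units, policy):
    """Regularized LSTD-Q: solve for the weights of the current feature set."""
    k = 6 + len(units)
    A, b = 1e-3 * np.eye(k), np.zeros(k)
    for s, a, r, s2 in samples:
        phi = features(s, a, units)
        phi_next = features(s2, policy(s2), units)
        A += np.outer(phi, phi - GAMMA * phi_next)
        b += r * phi
    return np.linalg.solve(A, b)

def greedy_policy(w, units):
    return lambda s: max(ACTIONS, key=lambda a: features(s, a, units) @ w)

def bellman_residuals(samples, w, units, policy):
    return np.array([r + GAMMA * features(s2, policy(s2), units) @ w
                     - features(s, a, units) @ w
                     for s, a, r, s2 in samples])

def fit_new_unit(samples, residuals, trials=200):
    """Crude stand-in for cascade-correlation candidate training: keep the
    random tanh unit whose output correlates most with the Bellman residual."""
    best_w, best_c = None, -1.0
    for _ in range(trials):
        w = rng.normal(size=3)
        out = np.array([np.tanh(w[0] * s / (N - 1) + w[1] * a + w[2])
                        for s, a, _, _ in samples])
        if out.std() < 1e-8:
            continue
        c = abs(np.corrcoef(out, residuals)[0, 1])
        if c > best_c:
            best_w, best_c = w, c
    return best_w

samples = collect_samples()
units, policy = [], (lambda s: +1)            # fixed initial policy
for it in range(5):                            # LSPI iterations with basis growth
    w = lstdq(samples, units, policy)
    policy = greedy_policy(w, list(units))     # snapshot of the current basis
    res = bellman_residuals(samples, w, units, policy)
    print(f"iter {it}: mean |Bellman residual| = {np.abs(res).mean():.4f}")
    units.append(fit_new_unit(samples, res))   # expand the basis with a new unit
```

In the paper's setting the candidate units are trained by cascade-correlation rather than sampled at random; the sketch only illustrates the overall loop of fitting with LSTD-Q, measuring the Bellman residual, and adding a hidden unit that captures it.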

Dates and versions

inria-00356262, version 1 (08-11-2012)

Identifiers

  • HAL Id: inria-00356262, version 1

Cite

Sertan Girgin, Philippe Preux. Incremental Basis Function Expansion in Reinforcement Learning using Cascade-Correlation Networks. 8th International Conference on Machine Learning and Applications, Dec 2008, San Diego, United States. pp.75-82. ⟨inria-00356262⟩
