%0 Conference Proceedings
%T Basis Function Construction in Reinforcement Learning using Cascade-Correlation Learning Architecture
%+ Sequential Learning (SEQUEL)
%+ Groupe de Recherche en Apprentissage Automatique (GRAppA - LIFL)
%+ Laboratoire d'Informatique Fondamentale de Lille (LIFL)
%A Girgin, Sertan
%A Preux, Philippe
%< peer-reviewed
%B International Conference on Machine Learning and Applications
%C San Diego, United States
%I IEEE Press
%3 Proceedings of the International Conference on Machine Learning and Applications (ICMLA)
%P 75-82
%8 2008-12
%D 2008
%Z Computer Science [cs]/Machine Learning [cs.LG] / Conference papers
%X In reinforcement learning, it is a common practice to map the state(-action) space to a different one using basis functions. This transformation aims to represent the input data in a more informative form that facilitates and improves subsequent steps. As a "good" set of basis functions results in better solutions, and defining such functions becomes a challenge with increasing problem complexity, it is beneficial to be able to generate them automatically. In this paper, we propose a new approach based on the Bellman residual for constructing basis functions using the cascade-correlation learning architecture. We show how this approach can be applied to the Least Squares Policy Iteration algorithm in order to obtain a better approximation of the value function, and consequently improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically on some benchmark problems.
%G English
%2 https://inria.hal.science/hal-00826054/document
%2 https://inria.hal.science/hal-00826054/file/icmla08.pdf
%L hal-00826054
%U https://inria.hal.science/hal-00826054
%~ UNIV-LILLE3
%~ CNRS
%~ INRIA
%~ INRIA-LILLE
%~ LIFL
%~ LAGIS
%~ INRIA_TEST
%~ TESTALAIN1
%~ INRIA2