%0 Journal Article
%T Preference-based reinforcement learning: evolutionary direct policy search using a preference-based racing algorithm
%+ MTA-SZTE Research Group on Artificial Intelligence
%+ Fachbereich Mathematik und Informatik [Marburg] [Dept. of Math and Computer Science]
%+ Sequential Learning (SEQUEL)
%+ DECISION
%A Busa-Fekete, Róbert
%A Szörényi, Balázs
%A Weng, Paul
%A Cheng, Weiwei
%A Hüllermeier, Eyke
%< avec comité de lecture
%@ 0885-6125
%J Machine Learning
%I Springer Verlag
%V 97
%N 3
%P 327-351
%8 2014-12-01
%D 2014
%R 10.1007/s10994-014-5458-8
%Z Statistics [stat]/Machine Learning [stat.ML]
%Z Journal articles
%X We introduce a novel approach to preference-based reinforcement learning, namely a preference-based variant of a direct policy search method based on evolutionary optimization. The core of our approach is a preference-based racing algorithm that selects the best among a given set of candidate policies with high probability. To this end, the algorithm operates on a suitable ordinal preference structure and only uses pairwise comparisons between sample rollouts of the policies. Embedding the racing algorithm in a rank-based evolutionary search procedure, we show that approximations of the so-called Smith set of optimal policies can be produced with certain theoretical guarantees. Apart from a formal performance and complexity analysis, we present first experimental studies showing that our approach performs well in practice.
%G English
%2 https://inria.hal.science/hal-01079370/document
%2 https://inria.hal.science/hal-01079370/file/revised_1_1.pdf
%L hal-01079370
%U https://inria.hal.science/hal-01079370
%~ UPMC
%~ UNIV-LILLE3
%~ CNRS
%~ INRIA
%~ INRIA-LILLE
%~ LAGIS
%~ INRIA_TEST
%~ TESTALAIN1
%~ LIP6
%~ INRIA2
%~ UPMC_POLE_1
%~ SORBONNE-UNIVERSITE
%~ SU-SCIENCES
%~ SU-TI
%~ ALLIANCE-SU
%~ INRIA-ALLEMAGNE