Conference paper, Year: 2014

An open framework for human-like autonomous driving using Inverse Reinforcement Learning

Abstract

Research on autonomous car driving and advanced driving assistance systems has come to occupy a very significant place in robotics research. At the same time, there are significant entry barriers (e.g., cost, legislation, logistics) that make it very difficult for small research groups and individual researchers to get access to a real autonomous vehicle for their experiments. This paper proposes to leverage an existing driving simulator (TORCS) by developing a ROS communication bridge for it. We use it as the basis for an experimental framework for the development and evaluation of human-like autonomous driving based on Inverse Reinforcement Learning (IRL). Built on an extensible and open architecture, this framework provides efficient GPU-based implementations of state-of-the-art IRL algorithms, as well as two challenging test environments and a set of evaluation metrics as a first step toward a benchmark.
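
As a rough illustration of how such a simulator bridge is typically used from the ROS side, the following Python sketch subscribes to the vehicle state and publishes driving commands. The topic names (/torcs/odom, /torcs/cmd_vel), message types, and the trivial constant-speed policy are assumptions made for this example only; they are not the actual interface or the IRL-based controller described in the paper.

# Minimal sketch of a ROS client for a TORCS-style simulator bridge.
# Topic names and message types are illustrative assumptions only.
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

def on_odometry(msg):
    # Placeholder policy: drive straight at a constant speed; a learned
    # IRL policy would compute the command from the observed state instead.
    cmd = Twist()
    cmd.linear.x = 10.0  # forward speed in m/s, arbitrary value
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("torcs_irl_driver")
    cmd_pub = rospy.Publisher("/torcs/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/torcs/odom", Odometry, on_odometry)
    rospy.spin()
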
Main file: vppc14.pdf (1.11 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01105271, version 1 (20-01-2015)

Identifiers

  • HAL Id: hal-01105271, version 1

Cite

Dizan Vasquez, Yufeng Yu, Suryansh Kumar, Christian Laugier. An open framework for human-like autonomous driving using Inverse Reinforcement Learning. IEEE Vehicle Power and Propulsion Conference, 2014, Coimbra, Portugal. ⟨hal-01105271⟩
779 views
1341 downloads
