Using Formal Conformance Testing to Generate Scenarios for Autonomous Vehicles
Abstract
Simulation, a common practice to evaluate autonomous vehicles, requires specifying realistic scenarios, in particular critical ones, which correspond to corner-case situations that occur rarely and are potentially dangerous to reproduce in real environments. Such simulation scenarios may be either generated randomly or specified manually. Random scenarios are easy to generate, but their relevance may be difficult to assess, for instance when many slightly different scenarios target the same feature. Manually specified scenarios can focus on a given feature, but their design can be difficult and time-consuming, especially when satisfactory coverage is required. In this work, we propose an automatic approach to generate a large number of relevant critical scenarios for autonomous driving simulators. The approach is based on the generation of behavioural conformance tests from a formal model (specifying the ground-truth configuration and the range of vehicle behaviours) and a test purpose (specifying the critical feature to focus on). The obtained abstract test cases cover, by construction, all possible executions exercising a given feature, and can be automatically translated into the inputs of autonomous driving simulators. We illustrate our approach by generating hundreds of behaviour trees for the CARLA simulator for several realistic configurations.
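To make the last step concrete, the sketch below illustrates, in plain Python and without relying on the authors' tool or on CARLA's actual ScenarioRunner API, how an abstract test case (a sequence of steps, each containing concurrent vehicle actions) could be rendered as a behaviour-tree skeleton of the kind a driving simulator consumes. All class, actor, and manoeuvre names here (`Action`, `Node`, `ego`, `npc_1`, `cut_in`, etc.) are hypothetical illustrations, not identifiers from the paper.

```python
"""Illustrative sketch only: maps an abstract conformance-test case to a
behaviour-tree skeleton. Names and structure are assumptions, not the
authors' translation into CARLA inputs."""
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Action:
    """One abstract test step: an actor performs a manoeuvre with parameters."""
    actor: str          # hypothetical identifier, e.g. "ego" or "npc_1"
    manoeuvre: str      # hypothetical manoeuvre name, e.g. "cut_in", "brake"
    params: dict = field(default_factory=dict)


@dataclass
class Node:
    """Minimal behaviour-tree node: composites hold children, leaves hold an action."""
    kind: str                               # "sequence", "parallel", or "leaf"
    name: str
    children: List["Node"] = field(default_factory=list)
    action: Optional[Action] = None


def test_case_to_tree(steps: List[List[Action]]) -> Node:
    """Map a test case (sequence of steps, each step a set of concurrent
    actions) to a sequence-of-parallels behaviour tree."""
    root = Node("sequence", "test_case")
    for i, concurrent in enumerate(steps):
        step = Node("parallel", f"step_{i}")
        for act in concurrent:
            step.children.append(
                Node("leaf", f"{act.actor}:{act.manoeuvre}", action=act))
        root.children.append(step)
    return root


def render(node: Node, indent: int = 0) -> str:
    """Pretty-print the tree, one node per line."""
    line = "  " * indent + f"[{node.kind}] {node.name}"
    if node.action:
        line += f" {node.action.params}"
    return "\n".join([line] + [render(c, indent + 1) for c in node.children])


if __name__ == "__main__":
    # Toy critical scenario: an NPC cuts in front of the ego vehicle and brakes,
    # while the ego is expected to keep following its lane.
    steps = [
        [Action("ego", "follow_lane", {"speed": 20}),
         Action("npc_1", "cut_in", {"gap": 8.0})],
        [Action("npc_1", "brake", {"decel": 4.0})],
    ]
    print(render(test_case_to_tree(steps)))
```

In an actual toolchain targeting CARLA, such a tree would instead be emitted in the format ScenarioRunner expects; the sketch only shows the shape of the mapping from abstract test steps to tree nodes.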