Domain adaptation for cross-sensor 3D object detection on point clouds
Abstract
The field of self-driving cars is advancing rapidly, attracting many technology companies to the area, including Google, Apple, Yandex, and Tesla. One of the most important and fundamental problems in autonomous driving is 3D object detection, whose goal is to localize and classify objects using data from different sensors.
Solving the 3D object detection problem raises two main issues: firstly, labelled data is scarce relative to unlabelled data; and secondly, existing datasets are captured with platforms that have different sensor setups (usually LiDAR sensors with different resolutions). This motivates the idea of reusing a model pre-trained on an available dataset in practical self-driving experiments, despite the different sensor settings.
In this thesis we address, firstly, the portability of a deep neural network pre-trained on data from a LiDAR with one setup to data from a LiDAR with another setup; we do this using 3D object detection as the example task and establish that the problem does exist. Secondly, we conduct experiments on scene-wise cross-sensor domain adaptation, altering the pre-trained model's weights in place with two domain adaptation approaches: adversarial training and maximum mean discrepancy (MMD).
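As a concrete illustration of the second approach, the sketch below shows a minimal MMD alignment loss in PyTorch with an RBF kernel. The batch size, feature dimension, and kernel bandwidth are illustrative assumptions, not values from the thesis; in the actual experiments such a loss would be computed on intermediate detector features extracted from source- and target-sensor scenes.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF (Gaussian) kernel matrix between the rows of x and y."""
    sq_dists = torch.cdist(x, y) ** 2  # pairwise squared Euclidean distances
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_loss(source_feats, target_feats, bandwidth=1.0):
    """Biased empirical estimate of squared MMD:
    MMD^2 = E[k(s, s')] - 2 E[k(s, t)] + E[k(t, t')]."""
    k_ss = rbf_kernel(source_feats, source_feats, bandwidth).mean()
    k_st = rbf_kernel(source_feats, target_feats, bandwidth).mean()
    k_tt = rbf_kernel(target_feats, target_feats, bandwidth).mean()
    return k_ss - 2.0 * k_st + k_tt

# Illustrative usage: random stand-ins for per-scene detector features
# from the source and target LiDAR (shapes are hypothetical).
source = torch.randn(64, 256)  # features of 64 source-sensor scenes
target = torch.randn(64, 256)  # features of 64 target-sensor scenes
print(mmd_loss(source, target, bandwidth=2.0))
```

Minimizing this quantity with respect to the feature extractor's weights pulls the source and target feature distributions together, which is the role MMD plays in the adaptation experiments.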