Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection
Journal article in IEEE Transactions on Intelligent Transportation Systems, 2024


Abstract

Occlusion is a major challenge for LiDAR-based object detection methods, as it renders regions of interest unobservable to the ego vehicle. A proposed solution to this problem is collaborative perception via Vehicle-to-Everything (V2X) communication, which leverages the diverse perspectives of connected agents (vehicles and intelligent roadside units) at multiple locations to form a complete scene representation. The major challenge of V2X collaboration is the performance-bandwidth tradeoff, which raises two questions: (i) which information should be exchanged over the V2X network, and (ii) how the exchanged information should be fused. The current state of the art settles on the mid-collaboration approach, in which Bird's-Eye View (BEV) images of point clouds are communicated to enable deep interaction among connected agents while reducing bandwidth consumption. While achieving strong performance, the real-world deployment of most mid-collaboration approaches is hindered by their overly complicated architectures and unrealistic assumptions about inter-agent synchronization. In this work, we devise a simple yet effective collaboration method based on exchanging the outputs of each agent's detector, which achieves a better bandwidth-performance tradeoff while minimising the required changes to single-vehicle detection models. Moreover, we relax the assumptions made by existing state-of-the-art approaches about inter-agent synchronization, requiring only a common time reference among connected agents, which can be achieved in practice using GPS time. Experiments on the V2X-Sim dataset show that our collaboration method reaches 76.72 mean Average Precision, which is 99% of the performance of early collaboration, while consuming as little bandwidth as late collaboration (0.01 MB on average). The code will be released at https://github.com/quan-dao/practical-collab-perception.
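The abstract describes exchanging detector outputs between agents and fusing them in the ego frame under a shared GPS time reference. The sketch below is a minimal, hypothetical Python illustration of that general idea, not the paper's actual pipeline: each received detection carries its own timestamp, is propagated to the ego timestamp under an assumed constant-velocity model, mapped into the ego frame with the agent-to-ego transform, and merged by greedy score-based suppression. All names, the constant-velocity compensation, and the distance-based duplicate removal are illustrative assumptions.

```python
# Hypothetical sketch of output-level fusion of 3D detections from connected
# agents with asynchronous timestamps. Illustrative only; not the authors' code.
from dataclasses import dataclass
import numpy as np


@dataclass
class Detection:
    center: np.ndarray    # (x, y, z) box center in the producing agent's frame
    size: np.ndarray      # (l, w, h)
    yaw: float            # heading in the producing agent's frame [rad]
    velocity: np.ndarray  # (vx, vy) estimate in the producing agent's frame
    score: float
    stamp: float          # GPS time at which the detection was produced [s]


def to_ego_frame(det: Detection, agent_to_ego: np.ndarray, ego_stamp: float) -> Detection:
    """Propagate a detection to the ego timestamp (constant-velocity assumption)
    and map it into the ego frame with a 4x4 homogeneous transform."""
    dt = ego_stamp - det.stamp
    center = det.center + np.array([det.velocity[0] * dt, det.velocity[1] * dt, 0.0])
    center_ego = (agent_to_ego @ np.append(center, 1.0))[:3]
    yaw_ego = det.yaw + np.arctan2(agent_to_ego[1, 0], agent_to_ego[0, 0])
    vel_ego = agent_to_ego[:2, :2] @ det.velocity
    return Detection(center_ego, det.size, yaw_ego, vel_ego, det.score, ego_stamp)


def fuse(all_dets: list[Detection], dist_thresh: float = 2.0) -> list[Detection]:
    """Greedily keep the highest-scoring boxes, dropping near-duplicates by BEV
    center distance (a simple stand-in for non-maximum suppression)."""
    kept: list[Detection] = []
    for det in sorted(all_dets, key=lambda d: d.score, reverse=True):
        if all(np.linalg.norm(det.center[:2] - k.center[:2]) > dist_thresh for k in kept):
            kept.append(det)
    return kept
```

Exchanging only such box-level outputs keeps the per-frame payload small, which is consistent with the reported bandwidth being on par with late collaboration (about 0.01 MB on average).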
Main file: 2307.01462.pdf (4.22 MB)
Origin: Files produced by the author(s)
License: CC BY-NC-ND - Attribution - NonCommercial - NoDerivatives

Dates and versions

hal-04438377, version 1 (05-02-2024)

License

Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)

Identifiers

HAL Id: hal-04438377
DOI: 10.1109/TITS.2024.3371177

Cite

Minh-Quan Dao, Julie Stephany Berrio, Vincent Frémont, Mao Shan, Elwan Héry, et al. Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection. IEEE Transactions on Intelligent Transportation Systems, in press, pp. 1-13. ⟨10.1109/TITS.2024.3371177⟩. ⟨hal-04438377⟩