Conference paper, 2021

Deep Active Learning from Multispectral Data Through Cross-Modality Prediction Inconsistency

Abstract

Data from multiple sensors provide independent and complementary information, which may improve the robustness and reliability of scene analysis applications. While many large-scale labelled benchmarks acquired by a single sensor exist, collecting labelled multi-sensor data is more expensive and time-consuming. In this work, we explore the construction of an accurate multispectral (here, visible and thermal cameras) scene analysis system with minimal annotation effort, via an active learning strategy based on cross-modality prediction inconsistency. Experiments on multiple multispectral datasets and vision tasks demonstrate the effectiveness of our method. In particular, with only 10% of the labelled data of the KAIST multispectral pedestrian detection dataset, we obtain performance comparable to fully supervised state-of-the-art methods.
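To make the acquisition criterion concrete, below is a minimal, hypothetical sketch (Python/NumPy) of sample selection by cross-modality prediction inconsistency: each unlabelled sample is scored by a symmetric KL divergence between the class posteriors predicted from the visible and thermal inputs, and the most inconsistent samples are sent for annotation. The function names, the choice of symmetric KL, and the classification setting are illustrative assumptions, not the paper's exact detection formulation.

    import numpy as np

    def symmetric_kl(p, q, eps=1e-8):
        # Symmetric KL divergence between two categorical distributions.
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def select_for_annotation(preds_visible, preds_thermal, budget):
        # Score each unlabelled sample by the disagreement between the two
        # modality-specific predictions; higher score = more informative.
        scores = np.array([symmetric_kl(p, q)
                           for p, q in zip(preds_visible, preds_thermal)])
        # Return the indices of the `budget` most inconsistent samples.
        return np.argsort(scores)[::-1][:budget]

    # Toy usage: 5 unlabelled samples, 3 classes, per-modality softmax outputs.
    rng = np.random.default_rng(0)
    preds_visible = rng.dirichlet(np.ones(3), size=5)
    preds_thermal = rng.dirichlet(np.ones(3), size=5)
    print(select_for_annotation(preds_visible, preds_thermal, budget=2))

In an active learning loop, the selected samples would be labelled, added to the training set, and the per-modality models retrained before the next selection round.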
Main file: icip2021.pdf (5.92 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03236409, version 1 (26-05-2021)

Cite

Heng Zhang, Elisa Fromont, Sébastien Lefevre, Bruno Avignon. Deep Active Learning from Multispectral Data Through Cross-Modality Prediction Inconsistency. ICIP 2021 - 28th IEEE International Conference on Image Processing, Sep 2021, Anchorage, United States. pp.1-5, ⟨10.1109/ICIP42928.2021.9506322⟩. ⟨hal-03236409⟩