Conference paper. Year: 2021

Deep Visible and Thermal Image Fusion with Cross-Modality Feature Selection for Pedestrian Detection

Abstract

This paper proposes a deep RGB and thermal image fusion method for pedestrian detection. A two-branch structure is designed to learn features from RGB and thermal images respectively, and these features are fused with a cross-modality feature selection module for detection. The method comprises the following stages. First, we learn features from paired RGB and thermal images through a backbone network with a residual structure, into which a feature squeeze-and-excitation module is added. Then we fuse the features learned by the two branches; a cross-modality feature selection module is designed to strengthen effective information and suppress useless information during the fusion process. Finally, multi-scale features are fused for pedestrian detection. Two sets of experiments on the public KAIST pedestrian dataset show that our method outperforms state-of-the-art methods: the robustness of the fused features is improved and the miss rate is noticeably reduced.
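To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of a two-branch backbone with squeeze-and-excitation gates in the residual units and a cross-modality selection module that re-weights channels of the concatenated RGB/thermal features. All class names, channel sizes, and layer choices are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: RGB and thermal branches with SE-gated residual units,
# plus a channel-attention fusion step ("cross-modality feature selection").
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling + bottleneck MLP + sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze to (B, C)
        return x * w.view(b, c, 1, 1)     # excite: channel-wise re-weighting


class ResidualSEBlock(nn.Module):
    """Residual unit with an SE gate on its body, as the abstract suggests."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.se = SEBlock(channels)

    def forward(self, x):
        return torch.relu(x + self.se(self.body(x)))


class CrossModalitySelection(nn.Module):
    """Concatenate RGB/thermal features, gate channels, reduce back to C."""
    def __init__(self, channels):
        super().__init__()
        self.se = SEBlock(2 * channels)
        self.reduce = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, rgb_feat, thermal_feat):
        fused = torch.cat([rgb_feat, thermal_feat], dim=1)  # (B, 2C, H, W)
        return self.reduce(self.se(fused))


class TwoBranchFusionBackbone(nn.Module):
    """RGB branch + thermal branch + cross-modality selection."""
    def __init__(self, channels=64):
        super().__init__()
        self.rgb_stem = nn.Conv2d(3, channels, 7, stride=2, padding=3)
        self.thermal_stem = nn.Conv2d(1, channels, 7, stride=2, padding=3)
        self.rgb_branch = ResidualSEBlock(channels)
        self.thermal_branch = ResidualSEBlock(channels)
        self.fusion = CrossModalitySelection(channels)

    def forward(self, rgb, thermal):
        f_rgb = self.rgb_branch(self.rgb_stem(rgb))
        f_th = self.thermal_branch(self.thermal_stem(thermal))
        return self.fusion(f_rgb, f_th)   # would feed a multi-scale detection head


if __name__ == "__main__":
    model = TwoBranchFusionBackbone()
    rgb = torch.randn(1, 3, 256, 320)
    thermal = torch.randn(1, 1, 256, 320)
    print(model(rgb, thermal).shape)      # torch.Size([1, 64, 128, 160])
```

In this sketch the SE gate inside each residual unit re-weights channels within a single modality, while the fusion-stage gate operates on the concatenated features, which is one plausible reading of "strengthen the effective information and suppress the useless information" across modalities.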
Main file
511910_1_En_10_Chapter.pdf (617.52 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03768739 , version 1 (04-09-2022)

Cite

Mingyue Li, Zhenzhou Shao, Zhiping Shi, Yong Guan. Deep Visible and Thermal Image Fusion with Cross-Modality Feature Selection for Pedestrian Detection. 17th IFIP International Conference on Network and Parallel Computing (NPC), Sep 2020, Zhengzhou, China. pp.117-127, ⟨10.1007/978-3-030-79478-1_10⟩. ⟨hal-03768739⟩