Conference paper, Year: 2020

Escaping Backdoor Attack Detection of Deep Learning

Abstract

Malicious attacks have become a top concern in deep learning (DL) because they continue to threaten the security and safety of applications in which DL models are deployed. The backdoor attack, an emerging member of this family, has attracted considerable detection research because of its severe consequences. The latest backdoor detection methods have made great progress by reconstructing backdoor triggers and performing outlier detection on the reconstruction results. Although they are effective against existing triggers, they fall short of detecting the stealthy triggers proposed in this work. The new triggers of our backdoor attack can be inserted into DL models in a hidden, reconstruction-resistant manner. We evaluate our attack against two state-of-the-art detection methods on three different datasets, and demonstrate that it successfully inserts the target backdoors while escaping detection. We hope our design sheds light on how backdoor detection should be advanced along this line in the future.
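For readers unfamiliar with the threat model, the sketch below illustrates the classic data-poisoning way of planting a backdoor: a small trigger patch is stamped onto a fraction of the training images, which are relabeled to the attacker's target class. This is a generic, hedged illustration of the concept only, not the reconstruction-resistant triggers proposed in the paper; the function name, patch shape, and parameters are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   patch_size=4, patch_value=1.0, seed=0):
    """Illustrative (hypothetical) backdoor poisoning: stamp a square
    trigger patch onto a random subset of training images and relabel
    those samples to the attacker's target class.

    Assumes `images` is an (N, H, W, C) float array and `labels` is an
    (N,) integer array.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = patch_value
    # Relabel poisoned samples so the model associates the patch with
    # the target class.
    labels[idx] = target_label
    return images, labels, idx
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the patch appears at test time. Trigger-reconstruction defenses attempt to recover a patch of this kind for each class and flag anomalously small ones as backdoors, which is the detection strategy the paper's stealthy triggers are designed to resist.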
Main file
497034_1_En_29_Chapter.pdf (1.26 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03440830, version 1 (22-11-2021)

License

Attribution (CC BY)


Cite

Yayuan Xiong, Fengyuan Xu, Sheng Zhong, Qun Li. Escaping Backdoor Attack Detection of Deep Learning. 35th IFIP International Conference on ICT Systems Security and Privacy Protection (SEC), Sep 2020, Maribor, Slovenia. pp.431-445, ⟨10.1007/978-3-030-58201-2_29⟩. ⟨hal-03440830⟩