Book Chapter, 2022

Deep Neural Network Attacks and Defense: The Case of Image Classification

Abstract

Machine learning with deep neural networks has made it possible to automatically recognize the visual content of images, and it works extremely well for image classification. However, an attacker can intentionally apply very slight modifications, almost invisible to the eye, that deceive the classification system into assigning the content to the wrong visual category. White-box attacks consider the scenario where the attacker knows everything about the classifier network. This chapter provides a quick overview of the techniques used to produce such adversarial images. It schematically distinguishes three families of defenses: reactive techniques, proactive techniques, and obfuscation techniques. An alternative view distinguishes whether the defense is an add-on module connected to the network, or an integral part of the network, resulting in a radical transformation of the classifier.
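The white-box setting described above underlies gradient-based attacks such as the fast gradient sign method (FGSM). As a rough illustration of how such barely visible adversarial perturbations can be produced (a minimal PyTorch sketch, not code from the chapter; the function name, the model interface, and the epsilon value are assumptions):

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    # Illustrative one-step white-box attack (assumed helper, not
    # from the chapter): perturb the image along the sign of the
    # loss gradient so the classifier's loss on the true label grows.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp so
    # the adversarial image remains a valid image in [0, 1].
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

With a small epsilon (here 8/255 per pixel, a common illustrative budget), the perturbation is nearly invisible yet often suffices to flip the predicted category, which is the phenomenon the chapter's attack and defense survey is organized around.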
No file deposited

Dates and versions

hal-03852749, version 1 (15-11-2022)

Cite

Hanwei Zhang, Teddy Furon, Laurent Amsaleg, Yannis Avrithis. Deep Neural Network Attacks and Defense: The Case of Image Classification. In Multimedia Security 1: Authentication and Data Hiding, Wiley, 2022. ISBN 9781789450262. ⟨10.1002/9781119901808.ch2⟩. ⟨hal-03852749⟩