Deep Neural Network Attacks and Defense: The Case of Image Classification
Abstract
Deep neural networks have made it possible to automatically recognize the visual content of images, and machine learning based on these networks performs extremely well at image recognition. However, an attacker can modify an image slightly and intentionally, with perturbations almost invisible to the eye, and deceive the classification system into assigning the content to the wrong visual category. White-box attacks consider the scenario where the attacker knows everything about the classifier network. This chapter provides a quick overview of the techniques used to produce such adversarial images. It schematically distinguishes three families of defenses: reactive techniques, proactive techniques, and obfuscation techniques. An alternative view distinguishes whether the defense is an add-on module connected to the network, or an integral part of the network, resulting in a radical transformation of the classifier.
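To make the white-box setting concrete, the sketch below illustrates one classic technique of this kind, the fast gradient sign method (FGSM); it is given only as an illustration and is not necessarily one of the methods surveyed in the chapter. It assumes a differentiable PyTorch classifier `model` that returns logits, input images scaled to [0, 1], and an attacker with full access to the model's gradients.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step white-box attack (FGSM, illustrative sketch).

    Perturbs the image along the sign of the loss gradient so that
    the classifier's loss on the true label increases, while keeping
    the perturbation bounded by epsilon in L-infinity norm, i.e.
    almost invisible to the eye.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # white-box: gradients are available
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```

A small epsilon (here 0.03, a common choice for images in [0, 1]) is typically enough to change the predicted class while leaving the image visually indistinguishable from the original.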