Analysis of the Impact of White-Box Adversarial Attacks on ResNet While Classifying Retinal Fundus Images
Abstract
Medical image analysis with deep learning (DL) techniques is widely recognized as a valuable aid in medical diagnosis. Among the attacks that aim to degrade the reliability of DL models, this paper focuses on adversarial attacks. Adversarial attacks, and the defenses that make DL models robust against them, have become an increasingly important research topic, with a surge of work on both sides. This paper studies white-box adversarial attacks, namely the Fast Gradient Sign Method (FGSM), the Box-constrained Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS-B) attack, and a variant of the L-BFGS-B attack. Two defense mechanisms are employed: adversarial training and defensive distillation (a gradient-masking approach). The reliability of these defense mechanisms against the attacks is studied, and the effect of the noise magnitude in FGSM is examined in detail. Retinal fundus images for diabetic retinopathy are used in the experiments. The results reveal the vulnerability of the ResNet model to these attacks.
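For reference, the FGSM perturbation whose noise magnitude is varied in this study takes the standard form introduced by Goodfellow et al.; the notation below is a sketch in our own symbols, not reproduced from the paper:

\[
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\left(\nabla_{x} J(\theta, x, y)\right)
\]

where \(J\) is the classification loss, \(\theta\) the model parameters, \(x\) the input fundus image, \(y\) its label, and \(\epsilon\) the perturbation (noise) magnitude.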