Conference paper, 2022

Towards understanding the robustness against evasion attack on categorical inputs

Abstract

Characterizing and assessing the adversarial risk of a classifier with categorical inputs is a practically important yet rarely explored research problem. Conventional wisdom attributes the difficulty of the problem to its combinatorial nature. Previous efforts to tackle it are tailored to specific use cases and depend heavily on domain knowledge, which limits their general applicability to real-world systems with categorical data. Our study shows, for the first time, that provably optimal adversarial robustness assessment is computationally feasible for any classifier satisfying a mild smoothness constraint. We theoretically analyze the factors that drive the adversarial vulnerability of a classifier with categorical inputs via an information-theoretic adversarial risk analysis. Corroborating these theoretical findings with an extensive experimental study over diverse real-world categorical datasets, we empirically assess the impact of the key adversarial risk factors on a targeted learning system with categorical inputs.
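
To make the combinatorial setting concrete: an evasion attacker on categorical data substitutes the values of at most a budgeted number of features, each drawn from that feature's finite vocabulary, trying to flip the classifier's decision. The sketch below is a minimal greedy illustration of that setting, not the paper's assessment algorithm; predict_proba, vocab, and budget are assumed, illustrative names.

    import numpy as np

    def greedy_categorical_evasion(predict_proba, x, vocab, budget=2):
        """Greedy illustration of an evasion attack on categorical inputs.

        predict_proba: callable, feature vector -> class-probability array
        x:             list of categorical feature values
        vocab:         dict mapping feature index -> iterable of legal values
        budget:        maximum number of features the attacker may modify
        """
        x_adv = list(x)
        target = int(np.argmax(predict_proba(x_adv)))  # class to evade
        for _ in range(budget):
            base = predict_proba(x_adv)[target]
            best_drop, best_change = 0.0, None
            # Score every single-feature substitution; this inner loop is
            # where the combinatorial cost of categorical inputs appears.
            for i, candidates in vocab.items():
                for v in candidates:
                    if v == x_adv[i]:
                        continue
                    trial = list(x_adv)
                    trial[i] = v
                    drop = base - predict_proba(trial)[target]
                    if drop > best_drop:
                        best_drop, best_change = drop, (i, v)
            if best_change is None:
                break  # no substitution lowers confidence further
            i, v = best_change
            x_adv[i] = v
            if int(np.argmax(predict_proba(x_adv))) != target:
                break  # decision flipped: evasion succeeded
        return x_adv

An exhaustive search over all budget-sized substitution sets would be exponential in the budget; the greedy loop above trades optimality for tractability, which is precisely the gap that a provably optimal yet computationally feasible robustness assessment, as claimed in the abstract, addresses.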

Dates and versions

hal-03893480, version 1 (11-12-2022)

Identifiers

  • HAL Id: hal-03893480, version 1

Cite

Hongyan Bao, Yufei Han, Yujun Zhou, Yun Shen, Xiangliang Zhang. Towards understanding the robustness against evasion attack on categorical inputs. ICLR 2022 - 10th International Conference on Learning Representations, Apr 2022, Virtual Event, France. ⟨hal-03893480⟩