Conference Paper, Year: 2022

Towards understanding the robustness against evasion attack on categorical inputs

Abstract

Characterizing and assessing the adversarial risk of a classifier with categorical inputs is a practically important yet rarely explored research problem. Conventional wisdom attributes the difficulty of the problem to its combinatorial nature. Previous research efforts tackling it are specific to individual use cases and depend heavily on domain knowledge, which prevents their general applicability to real-world applications with categorical data. Our study shows, for the first time, that provably optimal adversarial robustness assessment is computationally feasible for any classifier satisfying a mild smoothness constraint. Through an information-theoretic adversarial risk analysis, we theoretically identify the factors that drive the adversarial vulnerability of a classifier with categorical inputs. Corroborating these theoretical findings with an extensive experimental study over various real-world categorical datasets, we empirically assess the impact of the key adversarial risk factors on a targeted learning system with categorical inputs.
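(This record carries only the abstract, so the paper's provably optimal assessment procedure is not reproduced here. As an illustration of the combinatorial search problem the abstract describes, the minimal sketch below implements a generic greedy substitution attack against a classifier with categorical inputs; every name in it, such as greedy_evasion, predict_proba, candidate_values, and budget, is an assumption made for illustration, and this is not the authors' method.)

```python
def greedy_evasion(predict_proba, x, target_class, candidate_values, budget):
    """Greedy substitution attack on a categorical input (illustrative sketch).

    predict_proba    : callable mapping a list of categorical feature values
                       to a list of class probabilities (the targeted model)
    x                : list of categorical feature values to perturb
    target_class     : index of the class whose confidence the attacker lowers
    candidate_values : dict {feature_index: admissible category values}
    budget           : maximum number of feature substitutions allowed
    """
    x_adv = list(x)
    for _ in range(budget):
        base = predict_proba(x_adv)[target_class]
        best_drop, best_change = 0.0, None
        # Enumerate all single-feature substitutions and keep the one that
        # lowers the target-class confidence the most.
        for i, values in candidate_values.items():
            for v in values:
                if v == x_adv[i]:
                    continue
                trial = list(x_adv)
                trial[i] = v
                drop = base - predict_proba(trial)[target_class]
                if drop > best_drop:
                    best_drop, best_change = drop, (i, v)
        if best_change is None:
            break  # no single substitution helps; stop early
        i, v = best_change
        x_adv[i] = v
    return x_adv
```

A greedy heuristic like this only lower-bounds the true adversarial risk, while exhaustively enumerating all value combinations within the budget costs time exponential in the budget; the abstract's claim is that, under a mild smoothness constraint on the classifier, a provably optimal assessment remains computationally feasible.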
Main file: 3912_towards_understanding_the_robu.pdf (482.97 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03893480, version 1 (11-12-2022)

Identifiers

  • HAL Id: hal-03893480, version 1

Cite

Hongyan Bao, Yufei Han, Yujun Zhou, Yun Shen, Xiangliang Zhang. Towards understanding the robustness against evasion attack on categorical inputs. ICLR 2022 - 10th International Conference on Learning Representations, Apr 2022, Virtual Event, France. ⟨hal-03893480⟩