Conference Papers, Year: 2022

Inferring Sensitive Attributes from Model Explanations


Model explanations provide transparency into a trained machine learning model's black-box behavior to a model builder. They indicate the influence of different input attributes on the corresponding model prediction. The dependency of explanations on the input raises privacy concerns for sensitive user data. However, the current literature offers limited discussion of the privacy risks of model explanations. We focus on the specific privacy risk of attribute inference attacks, wherein an adversary infers sensitive attributes of an input (e.g., Race and Sex) given its model explanations. We design the first attribute inference attack against model explanations under two threat models, where the model builder either (a) includes the sensitive attributes in the training data and input or (b) censors the sensitive attributes by excluding them from the training data and input. We evaluate the proposed attack on four benchmark datasets and four state-of-the-art explanation algorithms. We show that an adversary can accurately infer the value of sensitive attributes from explanations under both threat models. Moreover, the attack succeeds even when exploiting only the explanations corresponding to the sensitive attributes. These results suggest that our attack is effective against explanations and poses a practical threat to data privacy. When combining model predictions (an attack surface exploited by prior attacks) with explanations, we observe that attack success does not improve. Additionally, attack success when exploiting model explanations is higher than when exploiting only model predictions. These results suggest that model explanations are a strong attack surface for an adversary to exploit.
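To illustrate the general idea of an attribute inference attack against explanations, the following is a minimal sketch, not the authors' exact method: an adversary trains an "attack model" that maps a target model's per-input explanation vectors to the sensitive attribute value. The synthetic data, the finite-difference attributions used as a stand-in for a real explanation algorithm, and all model choices below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a tabular dataset: X are non-sensitive features,
# s is a binary sensitive attribute correlated with the task label y.
n, d = 5000, 10
X = rng.normal(size=(n, d))
s = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)   # sensitive attribute
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 0).astype(int)  # task label

# Target model trained by the model builder; here the sensitive attribute is
# censored, i.e., not part of the model's input (threat model (b)).
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X, y)

# Explanations released alongside each prediction. As a simple surrogate for a
# gradient-based attribution method, use finite-difference sensitivities of the
# predicted class probability with respect to each input feature.
def explain(model, X, eps=1e-2):
    p0 = model.predict_proba(X)[:, 1]
    attributions = np.zeros_like(X)
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        attributions[:, j] = (model.predict_proba(Xp)[:, 1] - p0) / eps
    return attributions

E = explain(target, X)

# Attribute inference attack: with some auxiliary data labeled with the
# sensitive attribute, the adversary fits a classifier from explanation
# vectors to the sensitive attribute and evaluates it on held-out inputs.
E_aux, E_test, s_aux, s_test = train_test_split(E, s, test_size=0.5,
                                                random_state=0)
attack = LogisticRegression(max_iter=1000).fit(E_aux, s_aux)
print("attack accuracy from explanations:",
      accuracy_score(s_test, attack.predict(E_test)))
```

The same recipe applies to the other threat model by including the sensitive attribute in the target model's input, or by restricting the attack model's features to the explanation components of the sensitive attributes only.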
Main file: AttInfExplainability (21).pdf (1.15 MB)
Origin: Files produced by the author(s)

Dates and versions

HAL Id: hal-03781528, version 1 (20-09-2022)


Vasisht Duddu, Antoine Boutet. Inferring Sensitive Attributes from Model Explanations. CIKM 2022 - 31st ACM International Conference on Information and Knowledge Management, Oct 2022, Atlanta / Hybrid, United States. pp. 1-10. ⟨hal-03781528⟩