Conference Papers Year : 2022

A Novel Model-Based Attribute Inference Attack in Federated Learning

Abstract

In federated learning, clients such as mobile devices or data silos (e.g., hospitals and banks) collaboratively improve a shared model while keeping their data local. Several recent works show that clients' private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the targeted client and the server. In this paper, we propose a novel model-based attribute inference attack in federated learning which overcomes the limitations of gradient-based ones. Furthermore, we provide an analytical lower bound for the success of this attack. Empirical results on real-world datasets confirm that our attribute inference attack works well for both regression and classification tasks. Moreover, we benchmark our novel attribute inference attack against state-of-the-art attacks in federated learning. Our attack achieves higher reconstruction accuracy, especially when the clients' datasets are heterogeneous (as is common in federated learning). Most importantly, our model-based approach to designing powerful and explainable attacks enables an effective quantification of the privacy risk in FL.
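The full attack is described in the paper itself; as a rough illustration of the general idea behind attribute inference, the sketch below is a toy example, not the authors' algorithm. It assumes an adversary who has intercepted a client's local model (here, plain logistic-regression weights) and knows every feature of a target record except one binary sensitive attribute; the adversary guesses the value under which the intercepted model best fits the known label. All function names, the candidate set, and the logistic-regression setup are assumptions made for illustration only.

# Toy model-based attribute inference sketch (not the paper's algorithm).
# The adversary plugs each candidate value for the unknown sensitive
# attribute into the target record and keeps the value to which the
# intercepted model assigns the highest likelihood of the known label.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer_attribute(weights, bias, partial_record, label, attr_index, candidates=(0.0, 1.0)):
    """Return the candidate attribute value the intercepted model fits best."""
    best_value, best_likelihood = None, -np.inf
    for value in candidates:
        x = partial_record.copy()
        x[attr_index] = value                      # plug in the guessed attribute
        p = sigmoid(x @ weights + bias)            # model's confidence that label == 1
        likelihood = p if label == 1 else 1.0 - p  # likelihood of the true label
        if likelihood > best_likelihood:
            best_value, best_likelihood = value, likelihood
    return best_value, best_likelihood

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=5)        # stands in for an intercepted client model
    bias = 0.1
    record = rng.normal(size=5)
    record[2] = 1.0                     # ground-truth sensitive attribute (index 2)
    label = int(sigmoid(record @ weights + bias) > 0.5)
    guess, conf = infer_attribute(weights, bias, record, label, attr_index=2)
    print(f"guessed attribute = {guess}, likelihood = {conf:.3f}")

A gradient-based attack would instead work from intercepted gradient updates; the point of the model-based formulation, per the abstract, is that it remains effective when clients' datasets are heterogeneous and yields explainable, quantifiable privacy-risk estimates.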
Main file: FL_NeuIPS22.pdf (423.67 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03894598, version 1 (12-12-2022)

Identifiers

  • HAL Id: hal-03894598, version 1

Cite

Ilias Driouich, Chuan Xu, Giovanni Neglia, Frederic Giroire, Eoin Thomas. A Novel Model-Based Attribute Inference Attack in Federated Learning. FL-NeurIPS'22 - Federated Learning: Recent Advances and New Challenges, workshop in conjunction with NeurIPS 2022, Dec 2022, New Orleans, United States. ⟨hal-03894598⟩