Conference Paper, Year: 2022

Privacy attacks for automatic speech recognition acoustic models in a federated learning framework

Abstract

This paper investigates methods to effectively retrieve speaker information from personalized, speaker-adapted neural network acoustic models (AMs) in automatic speech recognition (ASR). This problem is especially important in the context of federated learning of ASR acoustic models, where a global model is learnt on the server from updates received from multiple clients. We propose an approach to analyzing the information in neural network AMs based on the footprint the network leaves on a so-called Indicator dataset. Using this method, we develop two attack models that aim to infer speaker identity from the updated personalized models without access to the actual users' speech data. Experiments on the TED-LIUM 3 corpus demonstrate that the proposed approaches are very effective and can achieve an equal error rate (EER) of 1-2%.
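The abstract only outlines the attack at a high level. As a rough, hedged illustration of the general idea (not the paper's actual method), the sketch below assumes that each personalized model update is summarized by a "footprint", here the mean of its outputs on a shared Indicator dataset, and that two updates are scored for speaker similarity by comparing their footprints with cosine similarity. All names (toy_acoustic_model, compute_footprint, same_speaker_score) and these modelling choices are assumptions introduced for illustration only.

```python
# Illustrative sketch only; the footprint definition and scoring rule are
# assumptions, not details taken from the paper.
import torch
import torch.nn as nn


def toy_acoustic_model(input_dim: int = 40, hidden_dim: int = 128,
                       num_outputs: int = 500) -> nn.Module:
    """Stand-in neural acoustic model mapping frame features to output posteriors."""
    return nn.Sequential(
        nn.Linear(input_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, num_outputs),
    )


@torch.no_grad()
def compute_footprint(model: nn.Module, indicator: torch.Tensor) -> torch.Tensor:
    """Footprint = mean output activation of the model on the Indicator data.

    The Indicator dataset is speech data available to the attacker; the target
    speaker's own recordings are never needed.
    """
    model.eval()
    return model(indicator).mean(dim=0)  # one vector per personalized model


def same_speaker_score(model_a: nn.Module, model_b: nn.Module,
                       indicator: torch.Tensor) -> float:
    """Cosine similarity between footprints: higher means more likely the same speaker."""
    fa = compute_footprint(model_a, indicator)
    fb = compute_footprint(model_b, indicator)
    return torch.nn.functional.cosine_similarity(fa, fb, dim=0).item()


if __name__ == "__main__":
    indicator = torch.randn(200, 40)      # stand-in Indicator features (frames x dims)
    enrol_model = toy_acoustic_model()    # update previously observed for a speaker
    trial_model = toy_acoustic_model()    # new update of unknown origin
    print("similarity score:", same_speaker_score(enrol_model, trial_model, indicator))
```

In such a setup, scores over many enrolment/trial pairs of model updates could be thresholded to produce the kind of EER figures reported in the abstract; the actual attack models and footprint statistics used by the authors are described in the paper itself.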
Main file: FL_icassp2022_c.pdf (610.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03539742, version 1 (22-01-2022)
hal-03539742, version 2 (24-01-2022)

Identifiers

  • HAL Id: hal-03539742, version 2

Cite

Natalia Tomashenko, Salima Mdhaffar, Marc Tommasi, Yannick Estève, Jean-François Bonastre. Privacy attacks for automatic speech recognition acoustic models in a federated learning framework. ICASSP 2022, 2022, Singapore, Singapore. ⟨hal-03539742v2⟩