Conference Papers, Year: 2023

Privacy leakages on NLP models and mitigations through a use case on medical data

Abstract

Patient medical data is extremely sensitive and private, and is thus subject to numerous regulations that require anonymization before the data can be disseminated. The anonymization of medical documents is a complex task, but recent advances in NLP models have shown encouraging results. Nevertheless, privacy risks associated with the NLP models themselves may remain. In this paper, we present the main privacy concerns in NLP and a case study, conducted in collaboration with the Hospices Civils de Lyon (HCL), on exploiting NLP models to anonymize medical data.
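To make the approach concrete, below is a minimal sketch of NER-based redaction of the kind the abstract alludes to. It assumes the Hugging Face transformers library and a hypothetical token-classification model trained to tag personal identifiers; the model name is a placeholder, as the abstract does not specify the model used in the paper.

    # Illustrative sketch only: NER-based de-identification of free text.
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="some-org/french-medical-ner",  # placeholder model name, not the paper's
        aggregation_strategy="simple",        # merge sub-tokens into entity spans
    )

    def anonymize(text: str, threshold: float = 0.5) -> str:
        """Replace detected entity spans with their label, e.g. [PER]."""
        spans = [e for e in ner(text) if e["score"] >= threshold]
        # Replace from the end of the string so earlier character offsets stay valid.
        for e in sorted(spans, key=lambda s: s["start"], reverse=True):
            text = text[: e["start"]] + f"[{e['entity_group']}]" + text[e["end"] :]
        return text

    print(anonymize("Le patient Jean Dupont a été admis aux HCL le 3 mai."))

The threshold trades recall for precision: lowering it redacts more aggressively, which matters when a single missed identifier constitutes a privacy leak.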
Main file: compas2023.pdf (811.64 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04138528, version 1 (23-06-2023)

Licence

Attribution

Identifiers

  • HAL Id: hal-04138528, version 1

Cite

Gaspard Berthelier, Antoine Boutet, Antoine Richard. Privacy leakages on NLP models and mitigations through a use case on medical data. COMPAS 2023 - Conférence francophone d'informatique en Parallélisme, Architecture et Système, LISTIC - Laboratoire d’Informatique, Systèmes, Traitement de l’Information et de la Connaissance / USBM - Université Savoie Mont Blanc, Jul 2023, Annecy, France. pp.1-8. ⟨hal-04138528⟩