Conference paper, 2019

Do You Need Embeddings Trained on a Massive Specialized Corpus for Your Clinical Natural Language Processing Task?

Abstract

We explore the impact of the data source used to learn word representations for different clinical NLP tasks in French (natural language understanding and text classification). We compared word embeddings (fastText) and contextual language models (ELMo), trained either on general-domain data (Wikipedia) or on specialized data (electronic health records, EHR). The best results were obtained with ELMo representations learned on EHR data for one of the two tasks (+7% and +8% gain in F1-score).
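The comparison can be illustrated with a minimal sketch (not the authors' code): train fastText skip-gram embeddings on a general corpus and on a specialized corpus, then use averaged document vectors in the same downstream classifier and compare F1-scores. The file names (wiki_fr.txt, ehr_fr.txt) and the data loader are placeholders for this illustration.

```python
# Minimal sketch of comparing embeddings trained on general vs. specialized text
# for a downstream text classification task. Corpus paths and the labeled-data
# loader are hypothetical placeholders.
import fasttext
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def doc_vector(model, text):
    """Average fastText word vectors over the tokens of a document."""
    vecs = [model.get_word_vector(tok) for tok in text.split()]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.get_dimension())


def evaluate(embedding_corpus, texts, labels):
    """Train skip-gram embeddings on `embedding_corpus`, then fit a linear
    classifier on averaged document vectors and report macro F1."""
    emb = fasttext.train_unsupervised(embedding_corpus, model="skipgram", dim=100)
    X = np.stack([doc_vector(emb, t) for t in texts])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")


# texts, labels = load_clinical_classification_data()  # placeholder loader
# print("Wikipedia embeddings:", evaluate("wiki_fr.txt", texts, labels))
# print("EHR embeddings:      ", evaluate("ehr_fr.txt", texts, labels))
```

The same protocol extends to contextual representations such as ELMo by replacing the averaged static vectors with contextual token embeddings, which is where the paper reports the largest gains.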
No file deposited

Dates and versions

hal-03887246 , version 1 (06-12-2022)

Identifiers

Cite

Antoine Neuraz, Vincent Looten, Bastien Rance, Nicolas Daniel, Nicolas Garcelon, et al.. Do You Need Embeddings Trained on a Massive Specialized Corpus for Your Clinical Natural Language Processing Task?. 17th World Congress on Medical and Health Informatics (MEDINFO), MCO Congress Group, Aug 2019, Lyon, France. pp.1558-1559, ⟨10.3233/SHTI190533⟩. ⟨hal-03887246⟩