Phenotypes Extraction from Text: Analysis and Perspective in the LLM Era
Abstract
Collecting the relevant list of patient phenotypes, known as deep phenotyping, can significantly improve the final diagnosis. As textual clinical reports are the richest source of phenotype information, their automatic extraction is a critical task. The main challenges of this Information Extraction (IE) task are to identify precisely the text spans related to a phenotype and to link them unambiguously to reference entities from a source such as the Human Phenotype Ontology (HPO).
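As a rough illustration of this two-step formulation, the sketch below mimics the pipeline with a naive string-matching baseline. The function names, the toy lexicon, and the example report are purely illustrative assumptions and are not taken from the paper; a real system would replace step 1 with a learned span detector.

```python
# Minimal sketch of the two-step phenotype-extraction task described above.
# All names (detect_spans, link_to_hpo, hpo_lexicon) are hypothetical.

from typing import List, Tuple

# Toy HPO lexicon: surface form -> HPO identifier (illustrative entries only).
hpo_lexicon = {
    "microcephaly": "HP:0000252",
    "seizures": "HP:0001250",
}

def detect_spans(note: str) -> List[Tuple[int, int]]:
    """Step 1 -- span detection: return character offsets of candidate
    phenotype mentions. A naive string match stands in here for a learned
    model such as BERT or GPT."""
    spans = []
    lowered = note.lower()
    for surface in hpo_lexicon:
        start = lowered.find(surface)
        if start != -1:
            spans.append((start, start + len(surface)))
    return spans

def link_to_hpo(note: str, spans: List[Tuple[int, int]]) -> List[str]:
    """Step 2 -- entity linking: map each detected span to an HPO concept."""
    return [hpo_lexicon[note[s:e].lower()] for s, e in spans
            if note[s:e].lower() in hpo_lexicon]

if __name__ == "__main__":
    report = "The patient presents with microcephaly and recurrent seizures."
    spans = detect_spans(report)
    print(link_to_hpo(report, spans))  # ['HP:0000252', 'HP:0001250']
```

Such a purely lexical baseline only captures explicit mentions, which is precisely the limitation discussed next.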
Recently, Language Models (LMs) have been the most successful approach for extracting phenotypes from clinical reports. Solutions such as PhenoBERT, relying on BERT or GPT, have shown promising results when applied to datasets built on the assumption that most phenotypes are explicitly mentioned in the text. However, this assumption does not always hold in medical genetics. Moreover, although LMs carry powerful semantic abilities, their contribution remains unclear compared to the syntactic string-matching steps used within current pipelines.
The goal of this study is to improve phenotype extraction from clinical notes related to genetic diseases. Our contributions are threefold. First, we provide a clear definition of the phenotype extraction task from free text, along with a high-level overview of the functions involved. Second, we conduct an in-depth analysis of PhenoBERT, one of the best existing solutions, to evaluate the proportion of phenotypes predicted with simple string matching. Third, we demonstrate how incorporating large language models (LLMs) in the span detection step can improve performance, especially on implicit phenotypes. In addition, this experiment revealed that the annotations of existing datasets are not exhaustive, and that LLMs can identify relevant spans missed by human labelers.