Journal article. Journal of Data Mining and Digital Humanities. Year: 2022

Adapting vs. Pre-training Language Models for Historical Languages

Enrique Manjavacas
  • Role: Author
Lauren Fonteyn
  • Role: Author

Abstract

As large language models such as BERT are becoming increasingly popular in Digital Humanities (DH), the question has arisen as to how such models can be made suitable for application to specific textual domains, including that of 'historical text'. Large language models like BERT can be pre-trained from scratch on a specific textual domain and achieve strong performance on a series of downstream tasks. However, this is a costly endeavour, both in terms of the computational resources and the substantial amount of training data it requires. An appealing alternative, then, is to employ existing 'general-purpose' models (pre-trained on present-day language) and subsequently adapt them to a specific domain by further pre-training. Focusing on the domain of historical text in English, this paper demonstrates that pre-training on domain-specific (i.e. historical) data from scratch yields a generally stronger background model than adapting a present-day language model. We show this on the basis of a variety of downstream tasks, ranging from established tasks such as Part-of-Speech tagging, Named Entity Recognition and Word Sense Disambiguation, to ad-hoc tasks like Sentence Periodization, which are specifically designed to test historically relevant processing.
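To make the contrast concrete, the sketch below illustrates the 'adaptation' strategy described in the abstract: continuing masked-language-model pre-training of a present-day BERT checkpoint on a historical-English corpus. This is a minimal example using the Hugging Face transformers and datasets libraries (not named in the abstract); the corpus path, checkpoint and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of domain adaptation by further pre-training (assumed setup,
# not the authors' code): continue masked-language-model training of a
# present-day BERT checkpoint on a historical-English corpus.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "bert-base-uncased"           # present-day general-purpose checkpoint
CORPUS = "historical_english.txt"     # hypothetical domain corpus, one text per line

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

# Load the raw historical text and tokenize it into model-ready inputs.
raw = load_dataset("text", data_files={"train": CORPUS})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking, as in standard MLM pre-training (15% of tokens).
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-adapted-historical",
        per_device_train_batch_size=16,
        num_train_epochs=1,           # illustrative; real adaptation runs longer
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the adapted model can then be fine-tuned on downstream tasks
```

The from-scratch alternative, which the paper finds generally stronger, would instead train a new tokenizer on the historical corpus and initialize the model from a fresh configuration rather than reusing present-day weights and vocabulary.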
Main file: JDMH_extended_camera_2.pdf (535.97 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03592137 , version 1 (01-03-2022)
hal-03592137 , version 2 (04-04-2022)
hal-03592137 , version 3 (13-06-2022)

Identifiers

  • HAL Id: hal-03592137
  • DOI: 10.46298/jdmdh.9152

Cite

Enrique Manjavacas, Lauren Fonteyn. Adapting vs. Pre-training Language Models for Historical Languages. Journal of Data Mining and Digital Humanities, 2022, NLP4DH, ⟨10.46298/jdmdh.9152⟩. ⟨hal-03592137v3⟩
