When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models

Preprint / Working Paper, 2020

Abstract

Transfer learning based on pretraining language models on large amounts of raw data has become the new norm for reaching state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied to unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high-resource languages, whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. Transliterating those languages significantly improves the performance of large-scale multilingual language models on downstream tasks.
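The abstract's central remedy is to transliterate an unseen language into a script that mBERT has seen (typically Latin) before applying the model. The following is a minimal sketch of such a pipeline, assuming Python with the transformers and unidecode packages; unidecode serves only as a stand-in transliterator and is not the transliteration scheme used in the paper.

# Minimal sketch: map text in an unseen script to Latin before tokenizing with
# mBERT. `unidecode` is a stand-in transliterator, not the paper's scheme.
from unidecode import unidecode
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize_transliterated(text: str):
    # Transliterate to a Latin/ASCII approximation, then tokenize with mBERT's
    # WordPiece vocabulary, which covers Latin script much more densely.
    latin_text = unidecode(text)
    return tokenizer.tokenize(latin_text)

# Hypothetical example: a sentence written in Perso-Arabic script.
print(tokenize_transliterated("سڵاو دنیا"))

In practice the transliterated text would then be fed to mBERT for fine-tuning on the downstream task, exactly as for any seen language.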

Dates and versions

hal-03109106 , version 1 (13-01-2021)


Cite

Benjamin Muller, Antonis Anastasopoulos, Benoît Sagot, Djamé Seddah. When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models. 2020. ⟨hal-03109106⟩

