Preprint / Working Paper, Year: 2023

Headless Language Models: Learning without Predicting with Contrastive Weight Tying

Abstract

Self-supervised pre-training of language models usually consists of predicting probability distributions over extensive token vocabularies. In this study, we propose an innovative method that shifts away from probability prediction and instead focuses on reconstructing input embeddings in a contrastive fashion via Contrastive Weight Tying (CWT). We apply this approach to pretrain Headless Language Models in both monolingual and multilingual contexts. Our method offers practical advantages, substantially reducing training computational requirements by up to 20 times, while simultaneously enhancing downstream performance and data efficiency. We observe a significant +1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement compared to classical LMs within similar compute budgets.
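The abstract describes the CWT objective only at a high level. As a rough illustration, the sketch below shows one plausible way such a contrastive reconstruction loss could be written in PyTorch: the model's output representations are trained to match the input-embedding rows of the target tokens, with the other targets in the batch serving as negatives. The function name `cwt_loss`, the cosine normalization, and the `temperature` parameter are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cwt_loss(hidden_states, target_ids, embedding_matrix, temperature=0.05):
    """Sketch of a contrastive weight-tying style objective (assumed form).

    hidden_states:    (batch, seq_len, dim) transformer outputs at prediction positions
    target_ids:       (batch, seq_len) ids of the tokens to reconstruct
    embedding_matrix: (vocab_size, dim) input embedding table reused as targets (tied weights)
    """
    dim = hidden_states.size(-1)
    preds = hidden_states.reshape(-1, dim)               # (N, dim) predicted representations
    targets = embedding_matrix[target_ids.reshape(-1)]   # (N, dim) input embeddings of the true tokens

    # Cosine-similarity logits between every prediction and every target in the batch
    # (normalization and temperature are illustrative choices).
    preds = F.normalize(preds, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = preds @ targets.t() / temperature           # (N, N)

    # The i-th prediction should be closest to the i-th target embedding;
    # all other target embeddings in the batch act as in-batch negatives.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

Note that this sketch ignores edge cases such as repeated tokens within a batch acting as false negatives; how the paper handles such details is not specified in the abstract.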

Dates and versions

hal-04264051 , version 1 (30-10-2023)

License

Attribution

Identifiers

Cite

Nathan Godey, Eric Villemonte de La Clergerie, Benoît Sagot. Headless Language Models: Learning without Predicting with Contrastive Weight Tying. 2023. ⟨hal-04264051⟩