Preprint, Working Paper. Year: 2023

Is Anisotropy Inherent to Transformers?

Abstract

The representation degeneration problem is a phenomenon that is widely observed among self-supervised learning methods based on Transformers. In NLP, it takes the form of anisotropy, a singular property of hidden representations which makes them unexpectedly close to each other in terms of angular distance (cosine similarity). Some recent works tend to show that anisotropy is a consequence of optimizing the cross-entropy loss on long-tailed distributions of tokens. We show in this paper that anisotropy can also be observed empirically in language models with specific objectives that should not suffer directly from the same consequences. We also show that the anisotropy problem extends to Transformers trained on other modalities. Our observations tend to demonstrate that anisotropy might actually be inherent to Transformer-based models.
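
As an illustration of the property discussed in the abstract, the sketch below shows one common way to quantify anisotropy: the average pairwise cosine similarity between hidden representations produced by a pretrained Transformer. This is not taken from the paper's own experimental protocol; the checkpoint name and example sentences are placeholders, and any Hugging Face encoder could be substituted.

```python
# Minimal sketch: estimate anisotropy as the mean pairwise cosine similarity
# of token-level hidden states from a pretrained Transformer encoder.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = [
    "The cat sat on the mat.",
    "Anisotropy concentrates embeddings in a narrow cone.",
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, dim)

# Keep only non-padding token vectors, then normalize them to unit length.
mask = inputs["attention_mask"].bool()
vectors = hidden[mask]                               # (n_tokens, dim)
vectors = torch.nn.functional.normalize(vectors, dim=-1)

# Average cosine similarity over all distinct pairs of token representations.
sim = vectors @ vectors.T                            # (n_tokens, n_tokens)
n = sim.size(0)
off_diag = sim.sum() - sim.diagonal().sum()
anisotropy = off_diag / (n * (n - 1))
print(f"Average pairwise cosine similarity: {anisotropy.item():.3f}")
```

A value close to 0 would indicate roughly isotropic (well-spread) representations, while the unexpectedly high values reported for Transformer hidden states are what the paper refers to as anisotropy.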

Dates and versions

hal-04264026, version 1 (29-10-2023)

License

Attribution (CC BY)

Identifiers

Cite

Nathan Godey, Eric Villemonte de La Clergerie, Benoît Sagot. Is Anisotropy Inherent to Transformers?. 2023. ⟨hal-04264026⟩