Conference paper, 2023

Black-box language model explanation by context length probing

Ondřej Cífka
Antoine Liutkus

Abstract

The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present context length probing, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing differential importance scores to be assigned to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The source code and a demo of the method are available.
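To make the idea concrete, here is a minimal sketch of context length probing in Python with the Hugging Face Transformers library. This is not the authors' released implementation: the model choice (GPT-2), the `log_prob` helper, and the one-forward-pass-per-context loop are illustrative assumptions. Following the abstract, the differential importance score of a context token is taken as the change in the target token's log-probability when the available context is extended by one token to include it.

```python
# Sketch of context length probing: score each context token by how much
# extending the context to include it changes the target's log-probability.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works; the paper uses larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "The quick brown fox jumps over the lazy dog"
ids = tokenizer(text, return_tensors="pt").input_ids[0]
n = len(ids)

@torch.no_grad()
def log_prob(target_pos: int, ctx_start: int) -> float:
    """Log-probability of the token at target_pos given only the
    truncated left context ids[ctx_start:target_pos]."""
    context = ids[ctx_start:target_pos].unsqueeze(0)
    logits = model(context).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)[ids[target_pos]].item()

# Differential importance scores for a fixed target token t: the gain in
# log-probability from starting the context at c rather than at c + 1
# (the token immediately before the target has no shorter baseline here).
t = n - 1  # explain the model's prediction of the last token
for c in range(t - 1):
    score = log_prob(t, c) - log_prob(t, c + 1)
    print(f"{tokenizer.decode([int(ids[c])])!r}: {score:+.4f}")
```

This sketch runs one forward pass per (target, context-start) pair for clarity and so scales quadratically; a practical implementation would batch the truncated contexts and reuse each forward pass for all target positions at once, since a causal language model scores every position of its input in a single call.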
Main file: main.pdf (749.85 KB)
Origin: Files produced by the author(s)
Licence: Copyright

Dates and versions

hal-03917930, version 1 (13-11-2023)

Licence

Attribution

Identifiers

HAL Id: hal-03917930
DOI: 10.18653/v1/2023.acl-short.92

Cite

Ondřej Cífka, Antoine Liutkus. Black-box language model explanation by context length probing. ACL 2023 - 61st Annual Meeting of the Association for Computational Linguistics, Jul 2023, Toronto, Canada. pp. 1067–1079. ⟨10.18653/v1/2023.acl-short.92⟩. ⟨hal-03917930⟩
