Conference Paper · Year: 2022

TOKEN is a MASK: Few-shot named entity recognition with pre-trained language models

Abstract

Transferring knowledge from one domain to another is of practical importance for many tasks in natural language processing, especially when the amount of available data in the target domain is limited. In this work, we propose a novel few-shot approach to domain adaptation in the context of Named Entity Recognition (NER). It is a two-step approach consisting of a variable base module and a template module that leverages the knowledge captured in pre-trained language models with the help of simple descriptive patterns. Our approach is simple yet versatile and can be applied in both few-shot and zero-shot settings. Evaluating our lightweight approach across a number of different datasets shows that it can boost the performance of state-of-the-art baselines by 2-5% F1-score.
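As the title suggests, the template module turns entity typing into a cloze task of the form "TOKEN is a [MASK]" and lets a masked language model fill in the blank. Below is a minimal sketch of that idea using the Hugging Face `fill-mask` pipeline; the template wording, the label-word mapping, and the choice of `bert-base-cased` are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of template-based entity typing with a masked language model.
# Not the paper's implementation: template, label words, and model are assumptions.
from transformers import pipeline

# Hypothetical mapping from entity types to natural-language label words.
LABEL_WORDS = {
    "PER": ["person"],
    "LOC": ["location", "country", "city"],
    "ORG": ["organization", "company"],
    "O":   ["word"],  # generic filler for non-entity spans
}

fill_mask = pipeline("fill-mask", model="bert-base-cased")

def classify_span(sentence: str, span: str) -> str:
    """Score each entity type by asking the LM to fill '<span> is a [MASK].'"""
    prompt = f"{sentence} {span} is a {fill_mask.tokenizer.mask_token}."
    scores = {}
    for label, words in LABEL_WORDS.items():
        # Restrict the mask prediction to the candidate label words
        # and keep the best-scoring one for this entity type.
        outputs = fill_mask(prompt, targets=words)
        scores[label] = max(o["score"] for o in outputs)
    return max(scores, key=scores.get)

print(classify_span("Barack Obama visited Paris.", "Paris"))  # expected: LOC
```

In a zero-shot setting such a template needs no labeled target-domain data at all; with a few labeled examples, the label words and templates can be tuned to the target domain.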

Dates and versions

hal-03746143, version 1 (04-08-2022)

Identifiers

  • HAL Id: hal-03746143, version 1

Cite

Ali Davody, David Ifeoluwa Adelani, Thomas Kleinbauer, Dietrich Klakow. TOKEN is a MASK: Few-shot named entity recognition with pre-trained language models. 25th International Conference on Text, Speech and Dialogue, Sep 2022, Brno, Czech Republic. ⟨hal-03746143⟩
