Conference paper, 2022

Transformer versus LSTM Language Models Trained on Uncertain ASR Hypotheses in Limited Data Scenarios

Imran Ahamad Sheikh
  • Role: Author

Abstract

In several ASR use cases, training and adaptation of domain-specific LMs can only rely on a small amount of manually verified text transcriptions and sometimes a limited amount of in-domain speech. Training of LSTM LMs in such limited data scenarios can benefit from alternate uncertain ASR hypotheses, as observed in our recent work. In this paper, we propose a method to train Transformer LMs on ASR confusion networks. We evaluate whether these self-attention-based LMs are better at exploiting alternate ASR hypotheses than LSTM LMs. Evaluation results show that Transformer LMs achieve a 3-6% relative reduction in perplexity on the AMI scenario meetings but perform similarly to LSTM LMs on the smaller Verbmobil conversational corpus.
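As a rough illustration of how alternate hypotheses from a confusion network might be presented to a self-attention LM, the sketch below replaces each input token with a posterior-weighted average of the embeddings of the competing words in the corresponding confusion bin, then runs a causal Transformer over the resulting sequence. This is a minimal PyTorch sketch under assumed inputs; the class name ConfusionNetworkTransformerLM and the alt_ids/alt_probs tensors are illustrative and do not reflect the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's code): a Transformer LM whose input at each
# position is the posterior-weighted average of the embeddings of the alternative
# words in one confusion-network bin.
import torch
import torch.nn as nn

class ConfusionNetworkTransformerLM(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, alt_ids, alt_probs):
        # alt_ids:   (batch, seq_len, n_alts) token ids of the alternatives in each bin
        # alt_probs: (batch, seq_len, n_alts) their posteriors (assumed to sum to 1 per bin)
        emb = self.embed(alt_ids)                         # (B, T, A, D)
        bin_emb = (alt_probs.unsqueeze(-1) * emb).sum(2)  # posterior-weighted average per bin
        T = bin_emb.size(1)
        x = bin_emb + self.pos(torch.arange(T, device=bin_emb.device))
        # Causal mask so each position only attends to earlier bins (LM objective).
        mask = torch.triu(torch.ones(T, T, device=x.device, dtype=torch.bool), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.out(h)                                # next-word logits per position
```

A training step would then score the reference (or 1-best) transcript shifted by one position with a cross-entropy loss, exactly as for a plain Transformer LM trained on text.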
Main file
ICASSP2022_Transformer_LM_01102021.pdf (245.26 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03362828 , version 1 (02-10-2021)
hal-03362828 , version 2 (08-05-2022)

Identifiers

  • HAL Id: hal-03362828, version 1

Cite

Imran Ahamad Sheikh, Emmanuel Vincent, Irina Illina. Transformer versus LSTM Language Models Trained on Uncertain ASR Hypotheses in Limited Data Scenarios. LREC 2022 - 13th Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03362828v1⟩
