Conference paper, 2022

Transformer versus LSTM Language Models Trained on Uncertain ASR Hypotheses in Limited Data Scenarios

Imran Ahamad Sheikh
  • Role: Author

Abstract

In several ASR use cases, training and adaptation of domain-specific LMs can only rely on a small amount of manually verified text transcriptions and sometimes a limited amount of in-domain speech. Training of LSTM LMs in such limited data scenarios can benefit from alternate uncertain ASR hypotheses, as observed in our recent work. In this paper, we propose a method to train Transformer LMs on ASR confusion networks. We evaluate whether these self-attention-based LMs are better at exploiting alternate ASR hypotheses than LSTM LMs. Evaluation results show that Transformer LMs achieve a 3–6% relative reduction in perplexity on the AMI scenario meetings but perform similarly to LSTM LMs on the smaller Verbmobil conversational corpus. Evaluation on ASR N-best rescoring shows that LSTM and Transformer LMs trained on ASR confusion networks do not bring significant WER reductions. However, a qualitative analysis reveals that they are better at predicting less frequent words.
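As a rough illustration of the setting, the sketch below builds a toy confusion network (each slot holds alternative ASR words with posterior probabilities), samples a word sequence from it, and rescores an N-best list with an external LM score. All names, the per-slot sampling strategy, and the interpolation weight are hypothetical illustrations of the general idea, not the training method proposed in the paper.

```python
import random
from typing import Callable

# A confusion network as a list of slots; each slot maps candidate words
# to their posterior probabilities ("<eps>" marks a null/skip arc).
ConfusionNetwork = list[dict[str, float]]

def sample_path(cn: ConfusionNetwork) -> list[str]:
    """Sample one word sequence by drawing a word per slot from its posteriors."""
    path = []
    for slot in cn:
        words, probs = zip(*slot.items())
        path.append(random.choices(words, weights=probs, k=1)[0])
    return [w for w in path if w != "<eps>"]  # drop null arcs

def rescore_nbest(
    nbest: list[tuple[str, float]],   # (hypothesis, ASR cost, e.g. negative log-likelihood)
    lm_cost: Callable[[str], float],  # LM scoring function, assumed to be provided
    lm_weight: float = 0.5,           # interpolation weight, a tuning choice
) -> str:
    """Return the hypothesis with the lowest combined ASR + LM cost."""
    return min(nbest, key=lambda h: h[1] + lm_weight * lm_cost(h[0]))[0]

# Toy usage with made-up numbers.
cn = [{"i": 0.9, "<eps>": 0.1}, {"want": 0.6, "won't": 0.4}, {"tea": 0.7, "t": 0.3}]
print(sample_path(cn))
print(rescore_nbest([("i want tea", 4.2), ("i won't tea", 3.9)],
                    lm_cost=lambda s: len(s.split())))
```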
Main file
Paper_367.pdf (259.64 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03362828, version 1 (02-10-2021)
hal-03362828, version 2 (08-05-2022)

Identifiers

  • HAL Id: hal-03362828, version 2

Cite

Imran Ahamad Sheikh, Emmanuel Vincent, Irina Illina. Transformer versus LSTM Language Models Trained on Uncertain ASR Hypotheses in Limited Data Scenarios. LREC 2022 - 13th Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03362828v2⟩
354 views
748 downloads
