Journal article in Computer Speech and Language, 2023

Training RNN Language Models on Uncertain ASR Hypotheses in Limited Data Scenarios

Abstract

Training domain-specific automatic speech recognition (ASR) systems requires a suitable amount of data from the target domain. In several scenarios, such as early development stages, privacy-critical applications, or under-resourced languages, only a limited amount of in-domain speech data and an even smaller amount of manual text transcriptions, if any, are available. This motivates the study of ASR language model (LM) training on a limited amount of in-domain speech data. Early works have attempted training of n-gram LMs from ASR N-best lists and lattices, but training and adaptation of recurrent neural network (RNN) LMs from ASR transcripts have not received attention. In this work, we study training and adaptation of RNN LMs using alternate, uncertain ASR hypotheses embedded in ASR confusion networks obtained from target domain speech data. We explore different methods for training the RNN LMs to deal with the uncertain input sequences. The first method extends the cross-entropy objective into a Kullback–Leibler (KL) divergence based training loss, the second method formulates a training loss based on a hidden Markov model (HMM), and the third method performs training on paths sampled from the confusion networks. These methods are applied to limited data setups including telephone and meeting conversation datasets. Performance is evaluated under two settings, in which either no manual transcriptions or only a small amount of manual transcriptions are available to aid the training. Moreover, a model adaptation setting is also evaluated, wherein the RNN LM is pre-trained on an out-of-domain conversational corpus. Overall, the sampling method for training RNN LMs on ASR confusion networks performs the best, and results in up to 12% relative reduction in perplexity on the meeting dataset as compared to training on ASR 1-best hypotheses, without any manual transcriptions. However, the perplexity reductions do not translate into equivalent word error rate (WER) reductions. A detailed analysis of the perplexity reductions obtained by the different methods is performed in order to understand this effect.
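To make the best-performing approach concrete, the sketch below illustrates sampling-based training of an RNN LM on a confusion network. It is a minimal illustration, not the authors' implementation: it assumes PyTorch, a toy confusion-network representation (a list of bins, each bin a list of (token_id, posterior) pairs), and hypothetical names such as RNNLM, sample_path, and train_step.

```python
# Minimal sketch (assumed setup, not the paper's code): train an RNN LM on
# paths sampled from an ASR confusion network. Each bin of the network holds
# alternative tokens with their ASR posterior probabilities; one token per bin
# is drawn according to those posteriors, and the sampled word sequence is
# used as an ordinary cross-entropy training example.
import random
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)

def sample_path(confusion_network):
    """Draw one token per bin according to the ASR posterior weights."""
    path = []
    for bin_ in confusion_network:
        tokens, posteriors = zip(*bin_)
        path.append(random.choices(tokens, weights=posteriors, k=1)[0])
    return path

def train_step(model, optimizer, confusion_network, bos_id):
    """One update on a single sampled path (next-token cross-entropy)."""
    path = sample_path(confusion_network)
    inputs = torch.tensor([[bos_id] + path[:-1]])   # shifted input sequence
    targets = torch.tensor([path])                  # tokens to predict
    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this view, repeated sampling over epochs exposes the LM to the alternate hypotheses in proportion to their ASR posteriors, in contrast to training only on the 1-best hypothesis.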
Main file
cn2lm_manuscript.pdf (419.75 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03327306, version 1 (27-08-2021)
hal-03327306, version 2 (21-08-2023)

License

Attribution

Identifiers

Cite

Imran Ahamad Sheikh, Emmanuel Vincent, Irina Illina. Training RNN Language Models on Uncertain ASR Hypotheses in Limited Data Scenarios. Computer Speech and Language, 2023, pp.101555. ⟨10.1016/j.csl.2023.101555⟩. ⟨hal-03327306v1⟩