%0 Conference Proceedings
%T Transformer versus LSTM Language Models Trained on Uncertain ASR Hypotheses in Limited Data Scenarios
%+ Vivoka
%+ Speech Modeling for Facilitating Oral-Based Communication (MULTISPEECH)
%A Sheikh, Imran Ahamad
%A Vincent, Emmanuel
%A Illina, Irina
%< avec comité de lecture
%B LREC 2022 - 13th Language Resources and Evaluation Conference
%C Marseille, France
%8 2022-06-20
%D 2022
%K Transformer
%K language model
%K confusion networks
%Z Computer Science [cs]/Computation and Language [cs.CL]
%Z Conference papers
%X In several ASR use cases, training and adaptation of domain-specific LMs can only rely on a small amount of manually verified text transcriptions and sometimes a limited amount of in-domain speech. Training of LSTM LMs in such limited data scenarios can benefit from alternate uncertain ASR hypotheses, as observed in our recent work. In this paper, we propose a method to train Transformer LMs on ASR confusion networks. We evaluate whether these self-attention based LMs are better at exploiting alternate ASR hypotheses compared to LSTM LMs. Evaluation results show that Transformer LMs achieve a 3–6% relative reduction in perplexity on the AMI scenario meetings but perform similarly to LSTM LMs on the smaller Verbmobil conversational corpus. Evaluation on ASR N-best rescoring shows that LSTM and Transformer LMs trained on ASR confusion networks do not bring significant WER reductions. However, a qualitative analysis reveals that they are better at predicting less frequent words.
%G English
%Z Grid'5000
%2 https://inria.hal.science/hal-03362828v2/document
%2 https://inria.hal.science/hal-03362828v2/file/Paper_367.pdf
%L hal-03362828
%U https://inria.hal.science/hal-03362828
%~ CNRS
%~ INRIA
%~ IRISA
%~ OPENAIRE
%~ INRIA_TEST
%~ INRIA-LORRAINE
%~ LORIA2
%~ INRIA-NANCY-GRAND-EST
%~ GRID5000
%~ TESTALAIN1
%~ UNIV-LORRAINE
%~ INRIA2
%~ LORIA
%~ LORIA-NLPKD
%~ UR1-MATH-STIC
%~ UR1-UFR-ISTIC
%~ INRIA-300009
%~ SILECS
%~ UR1-MATH-NUM