Conference paper, 2007

Improving language models by using distant information

Abstract

This study examines a novel way to take advantage of distant information in statistical language models. We show that it is possible to use n-gram models whose histories differ from those seen during training; we call these crossing context models. Our study covers both classical and distant n-gram models. A mixture of four models is proposed and evaluated. A linear mixture of bigram models improves perplexity by 14%, and the trigram mixture outperforms the standard trigram by 5.6%. These improvements are obtained without increasing the complexity of standard n-gram models. The resulting mixture language model was integrated into a speech recognition system, where it achieves a slight improvement in word error rate on the data of the French-language evaluation campaign ESTER. Finally, the impact of the proposed crossing context language models on performance is broken down by speaker.
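The abstract does not spell out how the mixture is computed. As a minimal illustrative sketch, not the authors' implementation, the Python below linearly interpolates a classical bigram (history = previous word) with a distant bigram (history = word two positions back). The add-alpha smoothing, the fixed weight lam, and all function names are assumptions made for illustration only.

```python
from collections import defaultdict

def train_ngram_counts(sentences, gap):
    """Count (history, word) pairs where the history word sits `gap`
    positions before the predicted word (gap=1 -> classical bigram,
    gap=2 -> distant bigram that skips the immediate predecessor)."""
    pair_counts = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        tokens = ["<s>"] * gap + sent + ["</s>"]
        for i in range(gap, len(tokens)):
            pair_counts[tokens[i - gap]][tokens[i]] += 1
    return pair_counts

def prob(counts, history, word, vocab_size, alpha=1.0):
    """Add-alpha smoothed conditional probability P(word | history)."""
    row = counts.get(history, {})
    total = sum(row.values())
    return (row.get(word, 0) + alpha) / (total + alpha * vocab_size)

def mixture_prob(w, h1, h2, classical, distant, lam, vocab_size):
    """Linear mixture of a classical bigram (history h1 = previous word)
    and a distant bigram (history h2 = word two positions back)."""
    return (lam * prob(classical, h1, w, vocab_size)
            + (1.0 - lam) * prob(distant, h2, w, vocab_size))

# Toy usage: a two-model mixture with a hand-picked weight.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = {t for s in corpus for t in s} | {"<s>", "</s>"}
classical = train_ngram_counts(corpus, gap=1)
distant = train_ngram_counts(corpus, gap=2)
p = mixture_prob("sat", "cat", "the", classical, distant,
                 lam=0.7, vocab_size=len(vocab))
print(f"P_mix(sat | cat, the) = {p:.3f}")
```

In practice the interpolation weights of such a mixture are typically estimated on held-out data (e.g., with EM) rather than fixed by hand as above; the paper evaluates a four-model mixture, whereas this sketch combines only two models for brevity.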
Main file
ISSAP2007.pdf (100.78 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00187084, version 1 (13-11-2007)

Identifiers

  • HAL Id: inria-00187084, version 1

Cite

Armelle Brun, David Langlois, Kamel Smaïli. Improving language models by using distant information. International Symposium on Signal Processing and its Applications - ISSPA 2007, Feb 2007, Sharjah, United Arab Emirates. ⟨inria-00187084⟩
87 views
354 downloads
