Conference Papers, Year: 2007

Improving language models by using distant information

Abstract

This study examines an original way to take advantage of distant information in statistical language models. We show that it is possible to use n-gram models that consider histories different from those used during training; we call these crossing context models. Our study covers classical and distant n-gram models. A mixture of four models is proposed and evaluated. A linear mixture of bigram models achieves a 14% improvement in perplexity, and the trigram mixture outperforms the standard trigram by 5.6%. These improvements are obtained without increasing the complexity of standard n-gram models. The resulting mixture language model has been integrated into a speech recognition system, where it yields a slight improvement in word error rate on the data of the francophone evaluation campaign ESTER. Finally, the impact of the proposed crossing context language models on performance is analyzed across speakers.
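To make the idea concrete, below is a minimal sketch of a linear mixture of a classical bigram and a distant bigram (one that conditions on the word two positions back, skipping the immediately preceding word). Everything here is illustrative: the toy corpus, the Lidstone smoothing, the function names (`train_bigram`, `mixture_prob`), and the mixture weights are assumptions for the sketch, not the models, smoothing, or weights used in the paper.

```python
from collections import Counter

def train_bigram(tokens, gap=1):
    """Count statistics where the history word sits `gap` positions
    before the predicted word (gap=1: classical bigram; gap=2: distant
    bigram that skips the immediately preceding word)."""
    pair_counts = Counter()
    hist_counts = Counter()
    for i in range(gap, len(tokens)):
        h, w = tokens[i - gap], tokens[i]
        pair_counts[(h, w)] += 1
        hist_counts[h] += 1
    return pair_counts, hist_counts

def prob(pair_counts, hist_counts, h, w, vocab_size, alpha=0.1):
    # Additive (Lidstone) smoothing so unseen events keep nonzero mass;
    # a stand-in for whatever smoothing the paper actually uses.
    return (pair_counts[(h, w)] + alpha) / (hist_counts[h] + alpha * vocab_size)

def mixture_prob(models, weights, history, w, vocab_size):
    """Linear interpolation: P(w | history) = sum_k lambda_k * P_k(w | h_k),
    where each component model reads its own history word."""
    total = 0.0
    for (pair_counts, hist_counts, gap), lam in zip(models, weights):
        h = history[-gap]  # the history word this component conditions on
        total += lam * prob(pair_counts, hist_counts, h, w, vocab_size)
    return total

tokens = "the cat sat on the mat because the cat was tired".split()
vocab = set(tokens)

# Two component models: classical bigram (gap=1) and distant bigram
# (gap=2). The weights 0.7/0.3 are hypothetical, not from the paper.
classical = train_bigram(tokens, gap=1) + (1,)
distant = train_bigram(tokens, gap=2) + (2,)
weights = [0.7, 0.3]

history = ("the", "cat")  # (w_{-2}, w_{-1})
print(mixture_prob([classical, distant], weights, history, "sat", len(vocab)))
```

The paper's four-model trigram mixture would extend this scheme with components conditioned on longer and more distant histories; in practice the weights would be estimated (e.g. by EM on held-out data) rather than fixed by hand as above.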
Main file: ISSAP2007.pdf (100.78 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00187084, version 1 (13-11-2007)

Identifiers

  • HAL Id: inria-00187084, version 1

Cite

Armelle Brun, David Langlois, Kamel Smaïli. Improving language models by using distant information. International Symposium on Signal Processing and its Applications - ISSPA 2007, Feb 2007, Sharjah, United Arab Emirates. ⟨inria-00187084⟩