Efficient Language Models Combination: Application to Phrase Finding
Conference paper, Year: 2001

Efficient Language Models Combination: Application to Phrase Finding

Abstract

In this paper, we propose a new approach to combine several language models more efficiently than with classical linear interpolation. This new language model is referred to as the Selected History Principle. In this model, the perplexity measure is used to select, for each history, the best language model. The method is tested with two language models: a bigram and a distant bigram. It achieves an improvement of 6 points in perplexity compared to linear interpolation. We also take advantage of the Selected History Principle to retrieve a set of useful variable-length phrases. 10,000 of them have been selected and integrated into the vocabulary. We then build a phrase-based bigram model which achieves an improvement of 18% compared to a baseline bigram.
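To make the selection mechanism concrete, here is a minimal sketch of the per-history model choice described in the abstract: for each history, the model with the lowest perplexity on held-out continuations of that history is kept. The toy probability tables, the `history_perplexity` helper, the flooring constant, and the flat `(history, word)` indexing of the distant bigram are all illustrative assumptions, not the paper's actual implementation.

```python
import math

# Hypothetical toy probability tables; in the paper these would be
# bigram and distant-bigram models estimated on a training corpus.
bigram = {("the", "cat"): 0.4, ("the", "dog"): 0.3, ("cat", "sat"): 0.5}
distant_bigram = {("the", "sat"): 0.2, ("cat", "mat"): 0.1}

def history_perplexity(model, history, continuations, floor=1e-6):
    """Perplexity of `model` restricted to the held-out words that
    follow `history` (unseen pairs are floored to a small probability)."""
    log_sum = sum(math.log(model.get((history, w), floor)) for w in continuations)
    return math.exp(-log_sum / len(continuations))

def select_model_per_history(histories, held_out, models):
    """Selected History Principle (sketch): for each history, keep the
    model with the lowest perplexity on that history's continuations."""
    selection = {}
    for h in histories:
        continuations = held_out.get(h)
        if not continuations:
            continue
        selection[h] = min(
            models, key=lambda name: history_perplexity(models[name], h, continuations)
        )
    return selection

# Example usage with hypothetical held-out continuations per history.
held_out = {"the": ["cat", "dog"], "cat": ["sat", "mat"]}
models = {"bigram": bigram, "distant_bigram": distant_bigram}
print(select_model_per_history(["the", "cat"], held_out, models))
```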
No file deposited

Dates and versions

inria-00100650, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00100650, version 1

Cite

David Langlois, Kamel Smaïli, Jean-Paul Haton. Efficient Language Models Combination: Application to Phrase Finding. Proceedings of the International Workshop "Speech and Computer" - SPECOM 2001, 2001, Moscow, Russia, 4 p. ⟨inria-00100650⟩