Efficient Language Model Combination: Application to Phrase Finding
Abstract
In this paper, we propose a new approach for combining several language models that is more effective than classical linear interpolation. The resulting language model relies on what we refer to as the Selected History Principle: the perplexity measure is used to select, for each history, the best language model. This method is tested with two language models, a bigram and a distant bigram, and achieves an improvement of 6 points in perplexity over linear interpolation. We also take advantage of the Selected History Principle to retrieve a set of useful variable-length phrases. We selected 10,000 of them and integrated them into the vocabulary. We then built a phrase-based bigram model, which achieves an improvement of 18% over a baseline bigram.
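As a rough illustration of the selection step (a minimal sketch, not the authors' implementation), the code below assumes each component model is exposed as a smoothed probability function `model(history, word)` with non-zero probabilities; the names `select_model_per_history`, `combined_prob`, and the held-out grouping are hypothetical.

```python
import math
from collections import defaultdict

def select_model_per_history(histories, models, heldout):
    """For each history, pick the component model with the lowest
    perplexity on held-out (history, word) events sharing that history."""
    # Group held-out events by history.
    events = defaultdict(list)
    for h, w in heldout:
        events[h].append(w)

    choice = {}
    for h in histories:
        words = events.get(h)
        if not words:
            choice[h] = 0  # unseen history: default to the first model
            continue
        best_idx, best_ppl = 0, float("inf")
        for i, model in enumerate(models):
            # Per-history perplexity: geometric mean of inverse probabilities.
            log_sum = sum(math.log(model(h, w)) for w in words)
            ppl = math.exp(-log_sum / len(words))
            if ppl < best_ppl:
                best_idx, best_ppl = i, ppl
        choice[h] = best_idx
    return choice

def combined_prob(h, w, models, choice):
    """Probability under the selected-history model: route each history
    to the component model chosen for it."""
    return models[choice.get(h, 0)](h, w)
```

At query time, `combined_prob` amounts to a dictionary lookup plus a single component-model evaluation per history, which is the sense in which such a scheme can be cheaper than evaluating and interpolating all component models.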