Improving Statistical Language Models by Removing Impossible Events
Abstract
This paper presents a new method that detects impossible bigrams in the space of $V^2$ bigrams built from a vocabulary of $V$ words and discards them from a statistical language model. We claim that discarding ungrammatical events, which cannot occur in well-written text, improves language models and also reduces the complexity of search algorithms in speech recognition. The Purged Language Model (PLM) requires a set of impossible bigrams, which are detected automatically using rules based on a class model, phonological rules, etc. Methods for redistributing the probability mass of impossible bigrams among the remaining possible events have been developed. This idea allows us to take advantage of natural-language constraints and to include linguistic criteria in statistical language models. The PLM has been tested on a test corpus of 2M words and achieves a perplexity improvement of 51% under certain conditions.
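As a rough illustration of the purge-and-redistribute idea described above, the sketch below removes a set of impossible bigrams from a toy conditional probability table and renormalizes the surviving successors of each history word. The table layout and the proportional renormalization are illustrative assumptions, not necessarily the redistribution scheme developed in the paper.

```python
from collections import defaultdict

def purge_bigram_model(probs, impossible):
    """probs: dict mapping (w1, w2) -> P(w2 | w1).
    impossible: set of (w1, w2) pairs judged ungrammatical.
    Returns a new model with impossible bigrams removed and their
    probability mass redistributed among the remaining successors
    of the same history (simple proportional renormalization,
    an assumed scheme for illustration)."""
    # Group conditional probabilities by their history word w1.
    by_history = defaultdict(dict)
    for (w1, w2), p in probs.items():
        by_history[w1][w2] = p

    purged = {}
    for w1, successors in by_history.items():
        # Keep only bigrams not declared impossible.
        kept = {w2: p for w2, p in successors.items()
                if (w1, w2) not in impossible}
        mass = sum(kept.values())
        if mass == 0:
            continue  # every successor purged; a real model would back off
        # Rescale so each history's distribution sums to 1 again.
        for w2, p in kept.items():
            purged[(w1, w2)] = p / mass
    return purged

# Toy example: declaring ("a", "a") impossible shifts its mass to ("a", "b").
model = {("a", "a"): 0.3, ("a", "b"): 0.7, ("b", "a"): 1.0}
print(purge_bigram_model(model, {("a", "a")}))
# -> {('a', 'b'): 1.0, ('b', 'a'): 1.0}
```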