Maximum Entropy Language Modeling with Non-Local Dependencies

Jun Wu, Johns Hopkins University

Statistical models of natural language are an important component of several applications such as speech recognition, handwriting and optical character recognition, spelling correction, and machine translation. A language model, in most cases, assigns a probability to the “next” word \(w_k\) in a sentence based upon the preceding words, or “history,” \(w_1,w_2,\cdots,w_{k-1}\). N-gram models, which predict \(w_k\) from only a few immediately preceding words, have been the mainstay of language modeling to date. Due to their local (Markov) dependence assumption, sentences preferred by N-gram models typically lack semantic coherence and syntactic well-formedness.
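For concreteness, the trigram model used as the baseline here embodies a second-order Markov approximation,

\[
P(w_k \mid w_1, w_2, \cdots, w_{k-1}) \;\approx\; P(w_k \mid w_{k-2}, w_{k-1}),
\]

so that only the two most recent words influence each prediction, regardless of what occurred earlier in the sentence.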

A new statistical language model is presented that combines collocational dependencies with two important sources of long-range statistical dependence in natural language: the syntactic structure and the topic of a sentence. These dependencies, or constraints, are integrated using the maximum entropy technique. Substantial improvements over a trigram model are demonstrated in both perplexity and speech recognition accuracy on the Switchboard task. A detailed analysis of the performance of this language model is provided in order to characterize the manner in which it improves on a standard N-gram model. It is shown that topic dependencies are most useful in predicting words that are semantically related to the subject matter of the conversation. Syntactic dependencies, on the other hand, are found to be most helpful in positions where the best predictors of the following word lie outside N-gram range due to an intervening phrase or clause. It is also shown that these two methods individually enhance an N-gram model in complementary ways, and that the overall improvement from their combination is nearly additive.
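As a sketch of how such constraints are typically combined under the maximum entropy principle (the specific feature set and training procedure used in this work are not detailed here), the conditional model takes the exponential form

\[
P(w_k \mid h_k) \;=\; \frac{1}{Z(h_k)} \exp\!\Big( \sum_i \lambda_i f_i(h_k, w_k) \Big),
\qquad
Z(h_k) \;=\; \sum_{w} \exp\!\Big( \sum_i \lambda_i f_i(h_k, w) \Big),
\]

where \(h_k\) denotes the available history (the N-gram context, syntactic information from a partial parse, and the topic of the conversation), each \(f_i\) is a feature function encoding one collocational, syntactic, or topic constraint, and the weights \(\lambda_i\) are chosen so that the model's expected feature counts match their empirical values in the training data.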