Notes on The Beauty of Mathematics: Statistical Language Models

Source: Internet
Author: User

The statistical language model is a mathematical model that forms the basis of all natural language processing. It is widely used in machine translation, speech recognition, and other fields, and it was originally developed to solve the problem of speech recognition.


In natural language processing, the question is how to judge whether a sequence of words is understandable and meaningful. Frederick Jelinek proposed a simple statistical model: to decide whether a sentence is reasonable, see how likely it is to occur, and measure that likelihood with probability. The probability that a sentence, as a sequence of words, appears in human language is what tells us whether that word sequence is grammatical.


The core of this method is to compute the probability that a word sequence occurs from the conditional probability of each word given the words that precede it. But since each word's appearance depends on all the words in front of it, the conditional probability of the last word has far too many possible histories for its parameters to be estimated.
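
In symbols (a standard textbook formulation, not quoted from the source), for a sentence S consisting of the words w_1, w_2, ..., w_n, the chain rule gives:

```latex
% Chain-rule decomposition of a sentence's probability
P(S) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_1, w_2)\cdots P(w_n \mid w_1, w_2, \ldots, w_{n-1})
```

The last factor is conditioned on all n-1 preceding words, so the number of distinct histories, and hence of parameters to estimate, explodes combinatorially.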


From the end of the 19th century to the beginning of the 20th, the Russian mathematician Andrey Markov proposed a rather effective workaround: assume that the probability of any word depends only on the single word immediately before it; then the problem becomes simple. This assumption is known in mathematics as the Markov assumption. The statistical language model corresponding to it is called the bigram model. Of course, one can also assume that each word is determined by the previous N-1 words; the corresponding model is slightly more complex and is known as the N-gram model.
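
As a concrete illustration, here is a minimal bigram-model sketch in Python (the toy corpus, function names, and the <s>/</s> boundary markers are my own assumptions, not anything from the source). It estimates P(w_i | w_{i-1}) as count(w_{i-1}, w_i) / count(w_{i-1}) and scores a sentence as the product of these conditional probabilities:

```python
from collections import defaultdict

def train_bigram(sentences):
    """Count unigrams and adjacent word pairs in a list of tokenized sentences."""
    unigram = defaultdict(int)
    bigram = defaultdict(int)
    for words in sentences:
        words = ["<s>"] + words + ["</s>"]   # sentence boundary markers
        for prev, cur in zip(words, words[1:]):
            unigram[prev] += 1
            bigram[(prev, cur)] += 1
    return unigram, bigram

def sentence_probability(words, unigram, bigram):
    """P(S) under the Markov assumption: the product of P(w_i | w_{i-1})."""
    words = ["<s>"] + words + ["</s>"]
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        if unigram[prev] == 0:
            return 0.0   # unseen history; real systems smooth this (see Good-Turing below)
        p *= bigram[(prev, cur)] / unigram[prev]
    return p

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
unigram, bigram = train_bigram(corpus)
print(sentence_probability(["the", "cat", "sat"], unigram, bigram))  # 0.5
```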


It is this statistical language model that has played a key role in Google's voice search and in its Chinese-English machine translation system (Rosetta). In 2007, when Google's Rosetta system first took part in the machine translation evaluation run by NIST (the U.S. National Institute of Standards and Technology), it surprised everyone by taking first place, with scores higher than those of all the rule-based systems. The secret weapon was a language model much larger than any other competitor's.


In the late 1980s, more than ten years after IBM first proposed the statistical language model, Kai-Fu Lee, then a PhD student at Carnegie Mellon University, used a statistical language model to reduce the 997-word speech recognition problem to the difficulty of recognizing 20 words, achieving large-vocabulary, speaker-independent continuous speech recognition for the first time.


In the N-gram model, why does N generally take a small value? There are two main reasons. First, the size (space complexity) of an N-gram model is almost an exponential function of N, and the speed of using it (time complexity) is almost exponential in N as well, so N cannot be very large. Second, when N goes from 1 to 2, and again from 2 to 3, the quality of the model rises significantly; but when it goes from 3 to 4 the improvement is no longer significant, while the resource cost grows very quickly. So unless one is pushing quality to the limit regardless of cost, few people use models of order higher than four. Google's Rosetta translation system and its voice search system both use a 4-gram model, which is stored across more than 500 Google servers.
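
To see why the cost grows so fast, here is a rough back-of-the-envelope count (my own illustration; the vocabulary size of 10^5 is an assumption, not a figure from the source). An N-gram model over a vocabulary V must in principle store a conditional probability for every N-word combination:

```latex
% Order-of-magnitude parameter count for an N-gram model
|V|^{N} \text{ parameters}; \quad |V| = 10^{5} \;\Rightarrow\;
10^{10}\ (N = 2),\quad 10^{15}\ (N = 3),\quad 10^{20}\ (N = 4)
```

In practice only the combinations actually seen in the training corpus are stored, but the table still grows quickly enough with N to require hundreds of servers, as the Rosetta example above shows.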


When estimating the probabilities of a language model, many people ignore the problem of zero probabilities, that is, of events for which there are no statistics. The art of training a statistical language model lies in solving the problem of probability estimation when the statistical sample is insufficient. In 1953, I. J. Good, under the guidance of his boss Alan Turing (a giant in the history of computing), proposed a method that, in estimation, trusts reliable statistics while discounting unreliable ones, and gives the small discounted share of probability to events that have never been seen. Good and Turing also gave a very neat formula for re-estimating the probabilities, which later came to be called the Good-Turing estimate.
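
In modern notation (a standard statement of the estimate, not copied from the source): write N_r for the number of distinct words that occur exactly r times in the corpus, and N for the total number of words observed. The Good-Turing estimate replaces each raw count r with a discounted count r*, and reserves the mass N_1/N for unseen events:

```latex
% Good-Turing re-estimation: discounted count and mass for unseen events
r^{*} = (r + 1)\,\frac{N_{r+1}}{N_{r}}, \qquad
P(\text{unseen}) = \frac{N_{1}}{N}
```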


The principle behind the formula is this: for an event we have never seen, we cannot assume that its probability of occurring is zero, so we allocate a very small share of the total probability mass to these unseen events. Once we do that, the probabilities of the events we have seen must sum to less than 1, so we have to turn down the probability of every seen event. As for how much to turn each one down, the rule is that the less trustworthy the statistics, the larger the discount.
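
A minimal sketch of this re-estimation in Python (illustrative only; the toy text and function name are my own, and on such a tiny sample the N_r counts are noisy, so real implementations first smooth N_r, as in the Simple Good-Turing method):

```python
from collections import Counter

def good_turing(counts):
    """Re-estimate counts as r* = (r + 1) * N_{r+1} / N_r and
    reserve probability mass N_1 / N for unseen events."""
    n_r = Counter(counts.values())        # N_r: how many words occur exactly r times
    total = sum(counts.values())          # N: total words observed
    adjusted = {}
    for word, r in counts.items():
        if n_r.get(r + 1):                # discount only where N_{r+1} is available
            adjusted[word] = (r + 1) * n_r[r + 1] / n_r[r]
        else:
            adjusted[word] = r            # keep the raw count for the highest frequencies
    p_unseen = n_r.get(1, 0) / total      # probability mass set aside for unseen words
    return adjusted, p_unseen

counts = Counter("the cat sat on the mat on the dog".split())
adjusted, p_unseen = good_turing(counts)
print(adjusted)    # words seen once are discounted from 1 to 0.5
print(p_unseen)    # 4/9 of the mass reserved for unseen words (toy-sized sample)
```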
