The Good-Turing estimate addresses the data sparseness problem in N-gram language models. The main idea is to discount the probabilities of the N-grams that do occur and redistribute the deducted probability mass to the N-grams that never occur, thereby correcting the deviation between the maximum likelihood estimate and the true probability. It is one of the more practical smoothing algorithms.
Figure: moving from left to right, part of the probability mass of the seen events is reassigned to the unseen events.
Taking the estimation of word probabilities in a dictionary as an example, this article illustrates the Good-Turing formula.
Assume that the number of words that appear exactly r times in the corpus is N_r, and that the number of unseen words (out-of-vocabulary words, i.e. words that appear 0 times) is N_0. Let the size of the corpus be N. Obviously,

N = Σ_{r=1}^∞ r · N_r
The relative frequency of a word that appears r times is therefore r/N. Without any smoothing, this relative frequency would be used directly as the probability estimate for such words.
However, when r is very small, the statistics may be unreliable, so when computing the probabilities of words that appear r times, a smaller discounted count d_r is used instead of r. The Good-Turing estimate computes d_r according to the following formula:
d_r = (r + 1) · N_{r+1} / N_r
Obviously,

Σ_r d_r · N_r = N
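As a rough sketch, the Python snippet below builds the count-of-counts N_r from a toy word-count table and applies the formula above. The toy counts and the fallback to the raw count when N_{r+1} is zero are assumptions of the sketch, not part of the formula itself.

```python
from collections import Counter

def good_turing_discounted_counts(word_counts):
    """Compute d_r = (r + 1) * N_{r+1} / N_r for every observed count r,
    where N_r is the number of distinct words that occur exactly r times."""
    n_r = Counter(word_counts.values())          # count-of-counts N_r
    d = {}
    for r in n_r:
        if (r + 1) in n_r:
            d[r] = (r + 1) * n_r[r + 1] / n_r[r]
        else:
            # N_{r+1} = 0 (typical for the largest counts): the raw formula
            # would give 0, so as a simple fallback keep the original count.
            d[r] = r
    return d, n_r

# Toy word counts chosen so that N_1 > N_2 > N_3, as Zipf's law suggests.
counts = {"a": 3, "b": 2, "c": 2, "d": 1, "e": 1, "f": 1, "g": 1, "h": 1}
d, n_r = good_turing_discounted_counts(counts)
print(d)   # {3: 3, 2: 1.5, 1: 0.8} -- the rare words get discounted counts
```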
In general, the number of words that appear only once is greater than the number of words that appear twice, the number of words that appear twice is greater than the number that appear three times, and so on. This is known as Zipf's law. The figure shows the relationship between the occurrence count r and the corresponding number of words N_r in a small corpus.
In this way, unseen words are assigned a small non-zero probability, which solves the zero-probability problem, while the probabilities of low-frequency words are lowered slightly. In practical natural language processing, the probabilities of words whose counts exceed a certain threshold are not lowered; only the probabilities of words below the threshold are discounted, and the total probability mass taken away from them is exactly the probability assigned to the unseen words.
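A minimal sketch of this thresholded scheme for unigram probabilities follows; the threshold value and the toy counts are illustrative assumptions.

```python
from collections import Counter

def smoothed_unigram_probs(counts, threshold):
    """Thresholded Good-Turing smoothing for unigram probabilities (a sketch).
    Words seen at least `threshold` times keep their relative frequency r/N;
    rarer words are discounted via d_r = (r + 1) * N_{r+1} / N_r; the freed
    probability mass is reserved for unseen words."""
    n = sum(counts.values())                 # corpus size N
    n_r = Counter(counts.values())           # count-of-counts N_r
    probs = {}
    for word, r in counts.items():
        if r >= threshold or (r + 1) not in n_r:
            probs[word] = r / n              # frequent word: keep r/N
        else:
            d_r = (r + 1) * n_r[r + 1] / n_r[r]
            probs[word] = d_r / n            # rare word: discounted d_r/N
    unseen_mass = 1.0 - sum(probs.values())  # left over for unseen words
    return probs, unseen_mass

counts = {"a": 3, "b": 2, "c": 2, "d": 1, "e": 1, "f": 1, "g": 1, "h": 1}
probs, unseen = smoothed_unigram_probs(counts, threshold=3)
print(round(unseen, 4))   # positive mass reserved for out-of-vocabulary words
```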
The conditional probability estimate P(w_i | w_{i-1}) for a bigram (w_{i-1}, w_i) can be treated in the same way. Since the previous word w_{i-1} is used to predict the next word w_i, the conditional probabilities over all possible w_i must sum to 1, i.e.

Σ_{w_i ∈ V} P(w_i | w_{i-1}) = 1
For bigrams (w_{i-1}, w_i) that occur only a small number of times, the counts likewise need to be discounted with the Good-Turing method. This leaves a portion of the probability mass unallocated, reserved for the unseen bigrams (w_{i-1}, w_i). Based on this idea, the probabilities of the bigram model are estimated by the following formula:
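The estimate described here is the Katz backoff model; one common way to write it, with #(w_{i-1}, w_i) denoting the bigram count, f the relative frequency, and Q(w_{i-1}) a normalization factor that spreads the leftover probability mass over the unseen bigrams, is:

P(w_i | w_{i-1}) =
    f(w_i | w_{i-1})        if #(w_{i-1}, w_i) ≥ T
    f_GT(w_i | w_{i-1})     if 0 < #(w_{i-1}, w_i) < T
    Q(w_{i-1}) · f(w_i)     otherwise

with

Q(w_{i-1}) = (1 − Σ_{w_i seen after w_{i-1}} P(w_i | w_{i-1})) / Σ_{w_i unseen after w_{i-1}} f(w_i)

so that the conditional probabilities still sum to 1.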
where T is a threshold, typically around 8 to 10, and f_GT denotes the relative frequency after Good-Turing discounting.
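A rough Python sketch of such a backoff estimator follows; the threshold value, the plain unigram relative frequency used for backoff, and the cap that keeps a count from being discounted upward are simplifications assumed here, not prescribed by the formula above.

```python
from collections import Counter

def katz_bigram_model(tokens, threshold):
    """Sketch of a Katz-style backoff bigram estimator: frequent bigrams keep
    their relative frequency, rare bigrams are Good-Turing discounted, and
    unseen bigrams back off to a scaled unigram relative frequency."""
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    vocab = set(unigram)
    n_r = Counter(bigram.values())            # count-of-counts for bigrams

    def discounted(r):
        if r >= threshold or (r + 1) not in n_r:
            return r                          # frequent bigram: keep raw count
        # Cap at r so noisy N_r values on small data never discount upward.
        return min(r, (r + 1) * n_r[r + 1] / n_r[r])

    def p(w_prev, w):
        r = bigram.get((w_prev, w), 0)
        if r > 0:
            return discounted(r) / unigram[w_prev]
        # Unseen bigram: back off to the unigram frequency, scaled by the
        # factor Q(w_prev) so the conditional distribution still sums to 1.
        seen = {v for v in vocab if (w_prev, v) in bigram}
        seen_mass = sum(discounted(bigram[(w_prev, v)]) / unigram[w_prev]
                        for v in seen)
        unseen_unigram_mass = sum(unigram[v] / n for v in vocab - seen)
        q = (1.0 - seen_mass) / unseen_unigram_mass
        return q * unigram[w] / n

    return p

tokens = "the cat sat on the mat and the cat ran".split()
p = katz_bigram_model(tokens, threshold=2)
print(p("the", "cat"), p("the", "ran"))  # a seen vs. an unseen bigram after "the"
```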