A Dictionary-Based Full-Segmentation Algorithm for Chinese Word Segmentation


When using a dictionary-based word segmenter, if we can solve the following four problems:

1. How do we find all the words in a sentence? Every word that appears in the dictionary must be found.

2. How do we combine the words found in step 1 into a complete sentence? The combined words must reproduce the original sentence exactly.

3. How do we make sure that the combinations from step 2 cover every possible segmentation?

4. How do we choose the best one among all the possible segmentations as the final result?

Then our segmentation method is called a dictionary-based full-segmentation algorithm.


Let's illustrate with an example sentence: 中华人民共和国 (the People's Republic of China).

Suppose the dictionary contains the following words:

中华人民共和国, 中华人民, 中华, 中, 华人, 华, 人民共和国, 人民, 人, 民, 共和国, 共和, 共, 和, 国

Step 1: for each character position in the sentence, find all the dictionary words that begin at that position, and keep them for the next step. The results are:

1, [中华人民共和国, 中华人民, 中华, 中]
2, [华人, 华]
3, [人民共和国, 人民, 人]
4, [民]
5, [共和国, 共和, 共]
6, [和]
7, [国]
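Step 1 can be sketched in Java as follows. This is a minimal illustration, not the original implementation: the class name is made up, and the dictionary is a hard-coded toy set where a real segmenter would load a large word list.

```java
import java.util.*;

public class FullSegStep1 {
    // Toy dictionary for the example; a real segmenter loads a large word list.
    static final Set<String> DICT = new HashSet<>(Arrays.asList(
            "中华人民共和国", "中华人民", "中华", "中", "华人", "华",
            "人民共和国", "人民", "人", "民", "共和国", "共和", "共", "和", "国"));

    // For each start position, collect every dictionary word beginning there.
    static List<List<String>> wordsByPosition(String sentence) {
        List<List<String>> rows = new ArrayList<>();
        for (int i = 0; i < sentence.length(); i++) {
            List<String> row = new ArrayList<>();
            for (int end = sentence.length(); end > i; end--) {  // longest candidate first
                String candidate = sentence.substring(i, end);
                if (DICT.contains(candidate)) row.add(candidate);
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        List<List<String>> rows = wordsByPosition("中华人民共和国");
        for (int i = 0; i < rows.size(); i++) {
            System.out.println((i + 1) + ", " + rows.get(i));
            // row 1 prints [中华人民共和国, 中华人民, 中华, 中], matching the table above
        }
    }
}
```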

Step 2: combine the words found in step 1 into complete sentences. Looking at the step-1 results above: the number of rows equals the number of characters in the sentence; row i lists every word that begins with character i, including that single character itself; and concatenating the first character of each row reproduces the original sentence.

Look at the first word, 中华人民共和国: this one word already covers the whole sentence, so it alone forms one complete segmentation.

Next, look at the second word, 中华人民. If we choose it, the following word must begin with the fifth character, so we have three choices, namely the three words in row 5: [共和国, 共和, 共]. If we choose 共和国, we have completed another full sentence. Don't forget that the other two choices still remain to be explored...

Next, handle the third word 中华 and the fourth word 中 in the same way. We never need to start from row 2 or later, because a complete sentence must begin at its first character, that is, with a word from row 1.
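The word-by-word choices described above amount to a depth-first enumeration, which can be sketched in Java as follows. Again the class name and hard-coded dictionary are illustrative assumptions, not the original implementation:

```java
import java.util.*;

public class FullSegStep2 {
    // Same toy dictionary as before.
    static final Set<String> DICT = new HashSet<>(Arrays.asList(
            "中华人民共和国", "中华人民", "中华", "中", "华人", "华",
            "人民共和国", "人民", "人", "民", "共和国", "共和", "共", "和", "国"));

    // Depth-first traversal: try every dictionary word starting at `start`,
    // then recurse on the remainder. Each path that reaches the end of the
    // sentence is one complete segmentation (one leaf of the tree).
    static List<List<String>> segmentAll(String sentence, int start) {
        List<List<String>> results = new ArrayList<>();
        if (start == sentence.length()) {
            results.add(new ArrayList<>());  // empty tail: one finished path
            return results;
        }
        for (int end = start + 1; end <= sentence.length(); end++) {
            String word = sentence.substring(start, end);
            if (!DICT.contains(word)) continue;
            for (List<String> tail : segmentAll(sentence, end)) {
                List<String> seg = new ArrayList<>();
                seg.add(word);
                seg.addAll(tail);
                results.add(seg);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        List<List<String>> all = segmentAll("中华人民共和国", 0);
        System.out.println(all.size());  // 21 with the toy dictionary above
    }
}
```

With this dictionary the recursion produces exactly 21 complete segmentations, which matches the count discussed below.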

Step 3: from the word-selection process in step 2, we can see that finding all possible segmentations is not hard: the process amounts to traversing n trees (where n is the number of words in row 1 above, here 4) and counting how many leaves they have. The details of the algorithm are not described here. All possible segmentations for our example are listed, together with their scores, in the log below.


From the results of step 3 above, we can see that there are 21 possible segmentations. How do we choose the best one as the final result? We can use an ngram model; for details see the article "A method of using an ngram model to eliminate ambiguity in Chinese word segmentation". We use the ngram model to score the 21 segmentation results; if scores are equal, we select the segmentation with the fewest words (the minimum-word-count principle). See the scoring process below:

Bigram initialization complete, number of bigram entries: 1519443
bigram model  人民:共和国  score: 3.3166249
bigram model  人民:共  score: 4.0
bigram model  人民:共和国  score: 3.3166249
bigram model  人民:共  score: 4.0
人民共和国  score: 3.3166249
人民共和国  score: 3.3166249

ngram scoring results:
1, word count=5, ngram score=4.0 [中华, 人民, 共, 和, 国]
2, word count=6, ngram score=4.0 [中, 华, 人民, 共, 和, 国]
3, word count=4, ngram score=3.3166249 [中, 华, 人民, 共和国]
4, word count=3, ngram score=3.3166249 [中华, 人民, 共和国]
5, word count=2, ngram score=3.3166249 [中华, 人民共和国]
6, word count=3, ngram score=3.3166249 [中, 华, 人民共和国]
7, word count=5, ngram score=0.0 [中, 华, 人民, 共和, 国]
8, word count=4, ngram score=0.0 [中华人民, 共, 和, 国]
9, word count=3, ngram score=0.0 [中华人民, 共和, 国]
10, word count=4, ngram score=0.0 [中华, 人, 民, 共和国]
11, word count=6, ngram score=0.0 [中, 华, 人, 民, 共和, 国]
12, word count=5, ngram score=0.0 [中华, 人, 民, 共和, 国]
13, word count=5, ngram score=0.0 [中, 华人, 民, 共和, 国]
14, word count=6, ngram score=0.0 [中华, 人, 民, 共, 和, 国]
15, word count=4, ngram score=0.0 [中华, 人民, 共和, 国]
16, word count=7, ngram score=0.0 [中, 华, 人, 民, 共, 和, 国]
17, word count=1, ngram score=0.0 [中华人民共和国]
18, word count=4, ngram score=0.0 [中, 华人, 民, 共和国]
19, word count=2, ngram score=0.0 [中华人民, 共和国]
20, word count=6, ngram score=0.0 [中, 华人, 民, 共, 和, 国]
21, word count=5, ngram score=0.0 [中, 华, 人, 民, 共和国]

Keep only the maximum score:
1, word count=5, ngram score=4.0 [中华, 人民, 共, 和, 国]
2, word count=6, ngram score=4.0 [中, 华, 人民, 共, 和, 国]
With equal scores, choose the fewest words: [中华, 人民, 共, 和, 国], word count: 5

The reason we choose [中华, 人民, 共, 和, 国] here rather than [中华, 人民, 共和国] is that in the bigram data, the pair 人民:共 occurs more frequently than the pair 人民:共和国.
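The scoring and tie-breaking step can be sketched as follows. The class name is made up, and the bigram table holds only the two scores from the log above, where a real system would load over a million entries:

```java
import java.util.*;

public class NgramSelect {
    // Toy bigram table with the two scores from the log above;
    // a real system loads millions of entries from the bigram data file.
    static final Map<String, Double> BIGRAM = new HashMap<>();
    static {
        BIGRAM.put("人民:共", 4.0);
        BIGRAM.put("人民:共和国", 3.3166249);
    }

    // Score a segmentation as the sum of the scores of its adjacent word pairs;
    // unseen pairs contribute 0.
    static double score(List<String> seg) {
        double s = 0.0;
        for (int i = 0; i + 1 < seg.size(); i++) {
            s += BIGRAM.getOrDefault(seg.get(i) + ":" + seg.get(i + 1), 0.0);
        }
        return s;
    }

    // Pick the highest-scoring segmentation; on a tie, the one with the
    // fewest words (the minimum-word-count principle).
    static List<String> best(List<List<String>> candidates) {
        List<String> bestSeg = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (List<String> seg : candidates) {
            double s = score(seg);
            if (bestSeg == null || s > bestScore
                    || (s == bestScore && seg.size() < bestSeg.size())) {
                bestScore = s;
                bestSeg = seg;
            }
        }
        return bestSeg;
    }
}
```

Fed the 21 candidates from the enumeration step, this selects [中华, 人民, 共, 和, 国]: it ties with the 6-word segmentation at score 4.0, and the word-count tie-break decides.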

Thus we can see that the quality of the dictionary and of the ngram data directly affects the quality of the segmentation results.


An open-source Java implementation of the dictionary-based full-segmentation algorithm


