NLP | Natural Language Processing


What is annotation?
A common task in natural language processing is annotation. (1) Part-of-speech tagging: mark each word in a sentence with its part of speech, such as noun or verb. (2) Named entity tagging: mark special words in a sentence, such as addresses, dates, and names of people.
This is an example of part-of-speech tagging: given an input sentence, the computer automatically marks the part of speech of each word.

This is an example of entity labeling: given an input sentence, the computer automatically marks the entity category of each special word.
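To make the two tasks concrete, here is a minimal hand-written example; the sentence and tags are illustrative only, not the output of any real tagger:

```python
# Illustrative only: a hand-labeled example of the two annotation tasks.
sentence = ["John", "visited", "Beijing", "on", "Monday"]

# Part-of-speech tagging: one part-of-speech tag per word.
pos_tags = ["NOUN", "VERB", "NOUN", "ADP", "NOUN"]

# Named entity tagging: special words get an entity category, the rest get "O".
entity_tags = ["PERSON", "O", "LOCATION", "O", "DATE"]

for word, pos, ent in zip(sentence, pos_tags, entity_tags):
    print(f"{word:10s} {pos:6s} {ent}")
```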


Even at a rough glance, this is not a simple problem. First, a word can have multiple meanings, and different meanings appear in different contexts. Second, the meaning or part of speech of a word is also influenced by the words around it.
Before looking for a solution, we should describe the problem in mathematical language. A sentence can be regarded as a sequence: suppose the sentence s has n words and the i-th word is denoted xi, so that s = x1, x2, ..., xn. The problem can then be described as follows: for each word xi, we need to assign an annotation yi, which gives the annotation y = y1, y2, ..., yn of the sentence.
In short, we want a model that, for any sentence s, gives the probability p(y | s) of every possible annotation y; the y with the highest probability is the result we want. The final expression is tagging(s) = arg max_y p(y | s).
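Read literally, this objective can be implemented by enumerating every candidate tag sequence, which is what the sketch below does; the score function standing in for p(y | s) is a hypothetical placeholder supplied by the caller, and the enumeration is only feasible for very short sentences:

```python
from itertools import product

def tagging(sentence, tag_set, score):
    """Return the tag sequence y that maximizes score(y, sentence).

    `score` stands in for p(y | s) and must be supplied by the caller.
    Enumerating all len(tag_set) ** len(sentence) sequences is exponential,
    so this is only a literal reading of the arg max, not a practical method.
    """
    best_y, best_p = None, float("-inf")
    for y in product(tag_set, repeat=len(sentence)):
        p = score(y, sentence)
        if p > best_p:
            best_y, best_p = list(y), p
    return best_y
```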
Next, we need to consider how to build a training set and learn the above model. First, we need a corpus that has already been labeled: the corpus contains a number of sentences, and every word in every sentence carries a tag. From the corpus we can estimate, for each sentence s and its tag sequence y, the joint probability p(y, s), that is, the probability that the sentence and its tags occur together. Second, because the corpus cannot contain every possible sentence, we want a more general formulation. By Bayes' formula, p(y, s) = p(y) * p(s | y) and p(y | s) = p(y) * p(s | y) / p(s). Since we only need to compare values of p(y | s) to find the maximum, the exact value of p(s) does not matter, so we only need to consider tagging(s) = arg max_y p(y) * p(s | y).
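Written out, the step from p(y | s) to the product p(y) * p(s | y) is just Bayes' rule with the constant denominator dropped inside the arg max:

```latex
p(y \mid s) = \frac{p(y)\, p(s \mid y)}{p(s)}
\qquad\Longrightarrow\qquad
\operatorname{tagging}(s) = \arg\max_{y} p(y \mid s) = \arg\max_{y} p(y)\, p(s \mid y)
```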
Because the corpus cannot contain every possible sentence, we must find a way to estimate p(y) and p(s | y); one of the best-known methods is the hidden Markov model.
The hidden Markov model brings us back to the problem above. Given a sentence s = x1, x2, ..., xn, we want a tag sequence y = y1, y2, ..., yn such that y = arg max_y p(y) * p(s | y) = arg max_y p(x1, x2, ..., xn, y1, y2, ..., yn).
As with the language model in the previous chapter, we make two adjustments to each sentence: 1) add a start symbol "*" and define that every sentence begins with it, that is, x-1 = x0 = *; 2) add an end symbol "STOP" and define that every sentence ends with "STOP".
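A minimal sketch of this padding convention, assuming sequences are represented as plain Python lists:

```python
def pad(sequence):
    """Prepend two start symbols "*" and append a "STOP" symbol, so that
    trigram probabilities are defined even at the sequence boundaries."""
    return ["*", "*"] + list(sequence) + ["STOP"]

print(pad(["the", "dog", "barks"]))
# ['*', '*', 'the', 'dog', 'barks', 'STOP']
```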
The hidden Markov model also requires some extra assumptions to simplify the problem: 1) the tag yk depends only on the few tags immediately preceding it, that is, dependence between tags is limited to a short window; 2) the word xk depends only on its own tag yk and not on other words or tags, that is, the emission probabilities p(xi | yi) are conditionally independent of one another.
After simplification, take the trigram (third-order) hidden Markov model as an example. The expression is p(x1, x2, ..., xn, y1, y2, ..., yn) = p(y1, y2, ..., yn) * p(x1, x2, ..., xn | y1, y2, ..., yn) = ∏j q(yj | yj-2, yj-1) * ∏i e(xi | yi). Clearly, in the simplified model the quantities being counted are individual words and short tag sequences, which appear in the corpus far more often than whole sentences do.
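A minimal sketch of how this factorization scores one (sentence, tag sequence) pair, assuming q and e have already been estimated and are given as plain dictionaries keyed by (yj-2, yj-1, yj) and (xi, yi); log probabilities are used to avoid numerical underflow:

```python
import math

def hmm_log_score(words, tags, q, e):
    """log of  prod_j q(y_j | y_{j-2}, y_{j-1}) * prod_i e(x_i | y_i).

    q: dict mapping (y_{j-2}, y_{j-1}, y_j) -> probability (assumed given)
    e: dict mapping (word, tag)             -> probability (assumed given)
    A missing entry is treated as probability zero, i.e. log score -inf.
    """
    padded = ["*", "*"] + list(tags) + ["STOP"]
    log_p = 0.0
    for j in range(2, len(padded)):            # transition terms, incl. STOP
        p = q.get((padded[j - 2], padded[j - 1], padded[j]), 0.0)
        log_p += math.log(p) if p > 0 else float("-inf")
    for word, tag in zip(words, tags):         # emission terms
        p = e.get((word, tag), 0.0)
        log_p += math.log(p) if p > 0 else float("-inf")
    return log_p
```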
With the hidden Markov model, all we need to do is estimate the parameters q(yj | yj-2, yj-1) and e(xi | yi). q(yj | yj-2, yj-1) is explained in detail in the previous chapter on language models, and e(xi | yi) can be obtained easily by counting how often each word appears with each tag in the corpus. However, if a word does not appear in the corpus at all, then e(xi | yi) = 0 drives the probability of the whole sentence to 0. To solve this problem, we can adopt a simple scheme:
1) Divide all words in the corpus into frequent and infrequent words (determined by a threshold). 2) For frequent words, e(xi | yi) is estimated directly from corpus counts. 3) Infrequent words are divided into groups by predefined rules, and e(xi | yi) is estimated from the counts of the group rather than of the individual word.
For example, common grouping rules target special words such as dates, names, and abbreviations, and the method works well for these.
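A minimal sketch of this scheme, with a hypothetical frequency threshold and hypothetical grouping rules (four-digit numbers, all-caps abbreviations, capitalized words) standing in for whatever rules a real system would define:

```python
import re

RARE_THRESHOLD = 5  # hypothetical cutoff between frequent and infrequent words

def word_class(word):
    """Map an infrequent word to a pseudo-word group (rules are illustrative)."""
    if re.fullmatch(r"\d{4}", word):
        return "_FOUR_DIGITS_"   # e.g. a year inside a date
    if word.isupper():
        return "_ALL_CAPS_"      # e.g. an abbreviation
    if word[:1].isupper():
        return "_INIT_CAP_"      # e.g. a name
    return "_RARE_"

def normalize(word, word_counts):
    """Frequent words keep their identity; infrequent words fall back to their
    group, so e(x | y) is estimated from the group's counts instead."""
    if word_counts.get(word, 0) >= RARE_THRESHOLD:
        return word
    return word_class(word)
```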

Algorithm complexity
Suppose we have already trained q(yj | yj-2, yj-1) and e(xi | yi); given a sentence s = x1, x2, ..., xn, how do we obtain y = y1, y2, ..., yn? Method 1: brute force, which traverses every possible combination y1, y2, ..., yn, computes the probability of each, and keeps the maximum. Obviously the time complexity of brute force is unacceptable. Method 2: dynamic programming. Define m(k, u, v), where k is the position in the sentence and u, v are the tags of the last two words in the prefix consisting of the first k words. The recurrence is then m(k, u, v) = max over w of ( m(k-1, w, u) * q(v | w, u) * e(xk | v) ). There are many problems on LeetCode that illustrate this kind of dynamic programming.
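A minimal sketch of this dynamic program, a trigram Viterbi decoder, assuming the same q and e dictionaries as above; pi holds the table m(k, u, v) in log space and bp holds the backpointers used to recover the best tag sequence:

```python
import math

def viterbi(words, tag_set, q, e):
    """Trigram Viterbi decoding. pi[(k, u, v)] is the best log score of any
    tag sequence for the first k words that ends in tags (u, v)."""
    def logq(w, u, v):
        p = q.get((w, u, v), 0.0)
        return math.log(p) if p > 0 else float("-inf")

    def loge(x, v):
        p = e.get((x, v), 0.0)
        return math.log(p) if p > 0 else float("-inf")

    def tags_at(k):
        # Positions 0 and -1 carry the padding symbol "*".
        return ["*"] if k <= 0 else tag_set

    n = len(words)
    pi = {(0, "*", "*"): 0.0}
    bp = {}
    for k in range(1, n + 1):
        for u in tags_at(k - 1):
            for v in tag_set:
                best, best_w = float("-inf"), None
                for w in tags_at(k - 2):
                    score = (pi.get((k - 1, w, u), float("-inf"))
                             + logq(w, u, v) + loge(words[k - 1], v))
                    if score > best:
                        best, best_w = score, w
                pi[(k, u, v)] = best
                bp[(k, u, v)] = best_w

    # Close the sequence with the transition into STOP, then follow backpointers.
    best, best_uv = float("-inf"), None
    for u in tags_at(n - 1):
        for v in tag_set:
            score = pi.get((n, u, v), float("-inf")) + logq(u, v, "STOP")
            if score > best:
                best, best_uv = score, (u, v)
    tags = list(best_uv)
    for k in range(n, 2, -1):
        tags.insert(0, bp[(k, tags[0], tags[1])])
    return tags[-n:]   # drop the "*" padding kept when the sentence has one word
```

Compared with brute-force enumeration, the table has on the order of n * |tag_set|^2 entries and each entry takes |tag_set| work, so decoding is polynomial rather than exponential in the sentence length.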
