Basic Text Processing

1. Regular Expressions
Regular expressions are an important tool for text preprocessing.
A portion of the standard regex notation is summarized below:
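Since the notation table itself is not reproduced here, a few of the most common constructs can be illustrated with Python's re module (the example strings and patterns below are my own, not from the original notes):

```python
import re

text = "The 2 cats sat on 12 mats."

# Character class: [Cc] matches 'C' or 'c'; the trailing s? makes 's' optional.
print(re.findall(r"[Cc]ats?", text))        # ['cats']

# \d matches a digit; + means "one or more repetitions".
print(re.findall(r"\d+", text))             # ['2', '12']

# Anchors: ^ matches the start of the string, $ the end.
print(bool(re.search(r"^The", text)))       # True
print(bool(re.search(r"mats\.$", text)))    # True
```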
2. Word Tokenization
Every text-processing task relies on a uniform text normalization step.
How big is a text, and how many words does it contain?
We introduce the terms type and token:
a type is an element of the vocabulary, while a token is an instance of that type in running text.
If we define N = the number of tokens and V = the vocabulary (the set of types), then |V| is the vocabulary size. According to Church and Gale (1990), |V| > O(N^(1/2)); this can be seen from statistics on the works of Shakespeare, the Google N-gram corpus, and other collections:
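The type/token distinction is easy to make concrete in code. The sentence below is my own toy example:

```python
# Counting tokens (N) and types (|V|) in a small text.
text = "the cat sat on the mat and the dog sat too"
tokens = text.split()    # each occurrence in running text is a token
types = set(tokens)      # the distinct words form the vocabulary V

N = len(tokens)          # number of tokens
V = len(types)           # vocabulary size |V|
print(N, V)              # 11 8  ("the" occurs 3 times, "sat" twice)
```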
Word segmentation algorithms:
A simple tokenizer treats every non-alphabetic character as a token separator, but this has a number of drawbacks, for example:
- Finland's capital –> Finland? Finlands? Finland's?
- what're, I'm, isn't –> what are, I am, is not
- Hewlett-Packard –> Hewlett Packard?
- lowercase –> lower-case? lowercase? lower case?
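The failure cases above are easy to reproduce with such a naive tokenizer; a minimal sketch:

```python
import re

# Naive tokenizer: any run of non-alphabetic characters is a separator.
def naive_tokenize(text):
    return [t for t in re.split(r"[^A-Za-z]+", text) if t]

# Clitics and hyphens get split apart, as described above.
print(naive_tokenize("Finland's capital"))   # ['Finland', 's', 'capital']
print(naive_tokenize("isn't"))               # ['isn', 't']
print(naive_tokenize("Hewlett-Packard"))     # ['Hewlett', 'Packard']
```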
Although the above method works reasonably well for English, which has explicit separators, languages such as Chinese and Japanese have no spaces between words.
For Chinese word tokenization, also called word segmentation, the simplest and most widely used method is maximum matching (also called the greedy method).
The forward maximum matching (FMM) algorithm proceeds as follows:
- Starting from the beginning of the string, look up the current substring in the dictionary.
- If no match succeeds, delete the last character and try the match again.
- Repeat step 2 until a match succeeds, then output the matched word as a segment and continue matching from the remaining text (the characters deleted earlier).
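The steps above can be sketched as follows; the tiny dictionary is illustrative only (a real segmenter would use a large Chinese lexicon):

```python
def forward_max_match(text, dictionary, max_len=5):
    """Forward maximum matching (FMM): at each position, greedily take the
    longest substring found in the dictionary, shrinking from the right."""
    words = []
    i = 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            # Fall back to a single character if nothing matches.
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# Toy dictionary for the classic example "北京大学生".
dictionary = {"北京", "大学", "北京大学", "学生", "生"}
print(forward_max_match("北京大学生", dictionary))  # ['北京大学', '生']
```

Note the result: FMM greedily takes "北京大学" (Peking University) first, leaving "生" behind, which is exactly the kind of error the greedy strategy can make.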
For details of the method, refer to this article:
Text normalization:
This mainly includes case folding, stemming, simplified/traditional Chinese conversion, and so on.
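A minimal normalization sketch, combining case folding with a crude suffix-stripping "stemmer" (a real system would use something like the Porter algorithm; the suffix list here is my own simplification):

```python
def normalize(token):
    """Case folding plus naive suffix stripping, for illustration only."""
    token = token.lower()                    # case folding
    for suffix in ("ing", "ed", "s"):        # crude stemming rules
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

print([normalize(t) for t in ["Walking", "Cats", "Jumped", "The"]])
# ['walk', 'cat', 'jump', 'the']
```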
Sentence segmentation and decision trees:
Symbols such as "!" and "?" are unambiguous sentence boundaries, but in English the period "." appears in many contexts, e.g. abbreviations such as "Inc." and "Dr.", or numbers such as ".2%" and "4.3". These cannot be handled by a simple regular expression, so we introduce a decision-tree classifier to decide whether a period ends a sentence (EndOfSentence / NotEndOfSentence):
A decision tree solves this problem, but based on the same features we could also use other classification methods, such as logistic regression, SVMs, or neural networks.
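Such a decision tree can even be written by hand as nested if-statements; the features and abbreviation list below are illustrative assumptions, not the course's actual feature set:

```python
ABBREVIATIONS = {"dr", "mr", "mrs", "inc", "etc"}

def is_sentence_boundary(prev_word, punct, next_word):
    """A hand-written decision tree classifying a punctuation mark as
    EndOfSentence (True) or NotEndOfSentence (False)."""
    if punct in ("!", "?"):                          # unambiguous boundaries
        return True
    # punct is '.': inspect the surrounding words.
    if prev_word.lower() in ABBREVIATIONS:
        return False                                 # e.g. "Dr." or "Inc."
    if prev_word.isdigit() and next_word and next_word[0].isdigit():
        return False                                 # e.g. the '.' in "4.3"
    if next_word and next_word[0].isupper():
        return True                                  # next word capitalized
    return False

print(is_sentence_boundary("home", ".", "Then"))     # True
print(is_sentence_boundary("Dr", ".", "Smith"))      # False
print(is_sentence_boundary("4", ".", "3"))           # False
```

In practice one would learn such a tree (or a logistic regression / SVM over the same features) from labeled data rather than hand-coding the thresholds.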
Copyright notice: this is the blogger's original article; please do not reproduce it without the blogger's permission.
From Lecture 2 of Stanford University's Natural Language Processing course, "Basic Text Processing".