t: represents a feature term; |C|: represents the total number of categories; ci: represents the i-th category
cf[i][j]: the term-class frequency, i.e., the number of documents of category j in which term i appears
df[i]: the term document frequency, i.e., the number of documents in the sample set in which term i appears
docsPerClass[i]: the number of documents belonging to category i
docs: the total number of training documents
Note that cf[i][j], df[i], and docsPerClass[i] above are all document counts.
- Information gain
P(ci) is the probability that category ci appears in the document set; P(t) is the probability that feature t appears in the document set; P(ci|t) is the probability that a document belongs to class ci given that t appears in it; and P(ci|t̄) is the probability that a document belongs to class ci given that t does not appear in it.
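Assuming the usual information-gain definition used for text feature selection, the score of a feature t can be written in this notation as:

$$IG(t) = -\sum_{i=1}^{|C|} P(c_i)\log P(c_i) + P(t)\sum_{i=1}^{|C|} P(c_i|t)\log P(c_i|t) + P(\bar{t})\sum_{i=1}^{|C|} P(c_i|\bar{t})\log P(c_i|\bar{t})$$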
The calculation method is as follows:
For convenience of calculation, the current feature t is written as the i-th feature ti.
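A minimal Python sketch of this count-based estimation, assuming P(cj) = docsPerClass[j]/docs, P(t) = df[i]/docs, P(cj|t) = cf[i][j]/df[i], and P(cj|t̄) = (docsPerClass[j] − cf[i][j])/(docs − df[i]); the function name information_gain is illustrative and not from the original:

```python
import math

def information_gain(i, cf, df, docs_per_class, docs):
    """Information gain of term i, estimated from document counts."""
    p_t = df[i] / docs                         # P(t)
    p_not_t = 1.0 - p_t                        # P(t-bar)
    ig = 0.0
    for j, n_cj in enumerate(docs_per_class):
        p_c = n_cj / docs                      # P(cj)
        if p_c > 0:
            ig -= p_c * math.log(p_c)          # class entropy term
        # term present: P(cj|t) estimated as cf[i][j] / df[i]
        if df[i] > 0 and cf[i][j] > 0:
            p_c_t = cf[i][j] / df[i]
            ig += p_t * p_c_t * math.log(p_c_t)
        # term absent: P(cj|t-bar) estimated from the remaining documents
        rest = docs - df[i]
        absent = n_cj - cf[i][j]
        if rest > 0 and absent > 0:
            p_c_not_t = absent / rest
            ig += p_not_t * p_c_not_t * math.log(p_c_not_t)
    return ig
```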
- Mutual information
Unlike information gain, mutual information is computed between a feature and a single category, whereas information gain is computed between a feature and all categories. In practice, either the expectation of the per-category mutual information or the maximum mutual information over any single category is taken as the mutual-information score of the feature.
The calculation formula is as follows:
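Assuming the standard pointwise mutual information used in text categorization, the per-category score and the two combined scores are:

$$MI(t, c) = \log\frac{P(t|c)}{P(t)}$$

$$MI_{avg}(t) = \sum_{j=1}^{|C|} P(c_j)\,MI(t, c_j), \qquad MI_{max}(t) = \max_{1 \le j \le |C|} MI(t, c_j)$$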
where P(t) is the probability that the feature appears in the document set, and P(t|c) is the probability that feature t appears in documents of category c. The calculation method is as follows:
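A minimal Python sketch, assuming P(t) = df[i]/docs and P(t|cj) = cf[i][j]/docsPerClass[j]; the function names mutual_information and mi_score are illustrative:

```python
import math

def mutual_information(i, j, cf, df, docs_per_class, docs):
    """Pointwise mutual information between term i and category j."""
    if docs_per_class[j] == 0 or df[i] == 0 or cf[i][j] == 0:
        return float("-inf")                    # term and class never co-occur
    p_t = df[i] / docs                          # P(t)
    p_t_given_c = cf[i][j] / docs_per_class[j]  # P(t|c)
    return math.log(p_t_given_c / p_t)

def mi_score(i, cf, df, docs_per_class, docs, mode="max"):
    """Combine the per-category MI values into one feature score."""
    scores = [mutual_information(i, j, cf, df, docs_per_class, docs)
              for j in range(len(docs_per_class))]
    if mode == "max":
        return max(scores)
    # expectation weighted by the class priors P(cj); classes with no
    # co-occurrence are skipped here, which is a simplifying assumption
    return sum((docs_per_class[j] / docs) * s
               for j, s in enumerate(scores) if s != float("-inf"))
```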
- Chi-Square statistics
In practice, either the expectation of the chi-square statistic over all categories or the maximum chi-square statistic over any single category is usually taken as the chi-square score of the feature.
where N is the total number of documents; A: the number of documents in which feature t appears and which belong to class c; B: the number of documents in which feature t appears but which do not belong to class c; C: the number of documents in which feature t does not appear but which belong to class c; D: the number of documents in which feature t does not appear and which do not belong to class c. The calculation formula is as follows:
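With these four counts forming a 2x2 contingency table, the standard chi-square statistic is:

$$\chi^2(t, c) = \frac{N\,(AD - BC)^2}{(A+B)(A+C)(B+D)(C+D)}$$

A minimal Python sketch, assuming A = cf[i][j], B = df[i] − cf[i][j], C = docsPerClass[j] − cf[i][j], and D = docs − A − B − C; the function name chi_square is illustrative:

```python
def chi_square(i, j, cf, df, docs_per_class, docs):
    """Chi-square statistic between term i and category j."""
    A = cf[i][j]                 # t present, class c
    B = df[i] - A                # t present, not class c
    C = docs_per_class[j] - A    # t absent,  class c
    D = docs - A - B - C         # t absent,  not class c
    denom = (A + B) * (A + C) * (B + D) * (C + D)
    if denom == 0:
        return 0.0
    return docs * (A * D - B * C) ** 2 / denom
```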
- Expected cross-entropy
The only difference from information gain is that expected cross-entropy (ECE) does not take into account the case where the feature does not appear. The formula is as follows:
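Assuming the usual form of expected cross-entropy for text feature selection, which is the information-gain expression with the "feature absent" terms dropped:

$$ECE(t) = P(t)\sum_{i=1}^{|C|} P(c_i|t)\log\frac{P(c_i|t)}{P(c_i)}$$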
The calculation formula is as follows:
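A minimal Python sketch, using the same count-based estimates as above; the function name expected_cross_entropy is illustrative:

```python
import math

def expected_cross_entropy(i, cf, df, docs_per_class, docs):
    """Expected cross-entropy of term i; only the 'term present' case is used."""
    if df[i] == 0:
        return 0.0
    p_t = df[i] / docs                         # P(t)
    ece = 0.0
    for j, n_cj in enumerate(docs_per_class):
        if cf[i][j] == 0 or n_cj == 0:
            continue                           # skip classes with no co-occurrence
        p_c = n_cj / docs                      # P(cj)
        p_c_t = cf[i][j] / df[i]               # P(cj|t)
        ece += p_c_t * math.log(p_c_t / p_c)
    return p_t * ece
```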