In this section, we only consider one feature word selection framework: IG (information gain).
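As a minimal sketch of how IG scores a candidate feature word, the function below (my own illustration, not the author's code) computes the standard information gain: the entropy of the class label minus the conditional entropy given whether the word appears in a document.

```python
from collections import Counter
from math import log2

def information_gain(docs, labels, word):
    """IG(w) = H(C) - P(w) H(C | w present) - P(~w) H(C | w absent).

    docs: iterable of token sets (one per document).
    labels: class label of each document.
    """
    def entropy(class_counts):
        total = sum(class_counts.values())
        return -sum((n / total) * log2(n / total)
                    for n in class_counts.values() if n > 0)

    n_docs = len(docs)
    # Split class labels by presence/absence of the word.
    with_w = [c for d, c in zip(docs, labels) if word in d]
    without_w = [c for d, c in zip(docs, labels) if word not in d]

    ig = entropy(Counter(labels))
    if with_w:
        ig -= len(with_w) / n_docs * entropy(Counter(with_w))
    if without_w:
        ig -= len(without_w) / n_docs * entropy(Counter(without_w))
    return ig
```

A word that perfectly separates the two classes gets IG equal to the full class entropy (1 bit for balanced binary classes); a word distributed evenly across classes gets IG near 0.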
Two Kinds of Probability Modeling
The first type is classical probability modeling, which is the commonly accepted approach. In this method, the prior probability of each category is estimated from the frequency of the two classes in the training corpus. In my experiment, the two classes contain equal numbers of documents, so each prior is 1/2.
The article serves as a bridge between words and categories. Therefore, when calculating TF(T, C), we can model the documents with either a Bernoulli distribution (1) or a multinomial distribution (2), giving two probability calculation methods. In case (1), we only consider whether a word appears in an article: 1 if it appears, 0 otherwise. In case (2), we consider not only whether a word appears in an article, but also how many times it appears.
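The two cases above can be sketched as two estimators of a word's probability within a class; the function names and the token-list representation are my own assumptions for illustration.

```python
from collections import Counter

def bernoulli_prob(docs_in_class, word):
    """Case (1): fraction of documents in the class that contain the word.

    Only presence/absence matters, not the count.
    """
    return sum(1 for d in docs_in_class if word in d) / len(docs_in_class)

def multinomial_prob(docs_in_class, word):
    """Case (2): fraction of all token occurrences in the class
    that are this word, so repeated occurrences count.
    """
    counts = Counter(w for d in docs_in_class for w in d)
    total = sum(counts.values())
    return counts[word] / total
```

For example, with documents `["a", "a", "b"]` and `["b", "c"]` in one class, the Bernoulli estimate for `"a"` is 1/2 (it appears in one of two documents), while the multinomial estimate is 2/5 (two of five tokens).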
The following are the experimental results for Case 1 and Case 2:
Case 1:
Case 2:
After comparison, we found that the highest accuracy occurs at the same feature dimensions in both cases. Not only that: in both cases, the average accuracy of the five-fold cross-validation at each feature dimension is surprisingly consistent. (This is not fabricated data; there was no need.)
The average accuracy calculated by the two methods is as follows (the figure shows only part of it; I will package and upload the full accuracy data):