Feature Dimensionality Reduction (2): A Detailed Analysis of Feature Evaluation Functions in Feature Selection


$t$: the feature (term) under consideration; $|C|$: the total number of categories; $c_i$: the $i$-th category.

$cf[i][j]$: the term's class frequency, i.e., the number of documents of category $j$ in which term $i$ appears.

$df[i]$: the term's document frequency, i.e., the number of documents in the sample set in which term $i$ appears.

$docsPerClass[i]$: the number of documents belonging to category $i$.

$docs$: the total number of training documents.

Note that $cf[i][j]$, $df[i]$, and $docsPerClass[i]$ above are all counts of documents.

    1. Information gain

      $P(c_i)$ is the probability that category $c_i$ appears in the document set; $P(t)$ is the probability that feature $t$ appears in a document in the set; $P(c_i|t)$ is the probability that a document belongs to class $c_i$ given that $t$ appears in it; and $P(c_i|\bar{t})$ is the probability that a document belongs to class $c_i$ given that $t$ does not appear in it.

      The calculation method is as follows:

      $$IG(t) = -\sum_{i=1}^{|C|} P(c_i)\log P(c_i) + P(t)\sum_{i=1}^{|C|} P(c_i|t)\log P(c_i|t) + P(\bar{t})\sum_{i=1}^{|C|} P(c_i|\bar{t})\log P(c_i|\bar{t})$$

For computational convenience, the current feature $t$ is identified with the $i$-th feature $t_i$ from the notation above.
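To make this concrete, below is a minimal Python sketch that estimates $IG(t_i)$ directly from the count arrays defined in the notation section, using the empirical estimates $P(c_j) \approx docsPerClass[j]/docs$, $P(t_i) \approx df[i]/docs$, and $P(c_j|t_i) \approx cf[i][j]/df[i]$. The function name and the guards against empty counts are illustrative assumptions, not from the source.

```python
import math

def information_gain(i, cf, df, docs_per_class, docs):
    """Estimate IG(t_i) from document counts.

    cf[i][j]          : documents of class j containing term i
    df[i]             : documents containing term i
    docs_per_class[j] : documents belonging to class j
    docs              : total number of training documents
    """
    p_t = df[i] / docs                           # P(t)
    p_not_t = 1.0 - p_t                          # P(t-bar)
    ig = 0.0
    for j, n_j in enumerate(docs_per_class):
        p_c = n_j / docs                         # P(c_j)
        p_c_t = cf[i][j] / df[i] if df[i] else 0.0           # P(c_j | t)
        p_c_not_t = ((n_j - cf[i][j]) / (docs - df[i])
                     if docs > df[i] else 0.0)                # P(c_j | t-bar)
        ig -= p_c * math.log(p_c)                             # class entropy term
        if p_c_t > 0:
            ig += p_t * p_c_t * math.log(p_c_t)               # feature present
        if p_c_not_t > 0:
            ig += p_not_t * p_c_not_t * math.log(p_c_not_t)   # feature absent
    return ig

# Toy usage: 2 terms, 2 classes of 10 documents each.
cf = [[8, 1], [3, 3]]
df = [9, 6]
print(information_gain(0, cf, df, [10, 10], 20))
```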

    2. Mutual information

      Unlike information gain, mutual information is computed between a feature and a single category, whereas information gain is computed over a feature and all categories. In practice, either the expectation of the mutual information over all categories or the maximum mutual information over any single category is taken as the feature's mutual-information score.

      The calculation formula is as follows:

      $$MI(t, c) = \log\frac{P(t|c)}{P(t)}$$

      where $P(t)$ is the probability that feature $t$ appears in the document set, and $P(t|c)$ is the probability that a document of category $c$ contains feature $t$. These can be estimated from the counts above as $P(t_i) = df[i]/docs$ and $P(t_i|c_j) = cf[i][j]/docsPerClass[j]$.
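Under the same count-based estimates, here is a minimal sketch of the mutual-information score with both aggregation strategies described above (expectation over categories and maximum over a single category); the function names and the eps smoothing constant are illustrative assumptions.

```python
import math

def mutual_information(i, j, cf, df, docs_per_class, docs, eps=1e-12):
    """MI(t_i, c_j) = log( P(t|c) / P(t) ), estimated from counts."""
    p_t = df[i] / docs                        # P(t)
    p_t_c = cf[i][j] / docs_per_class[j]      # P(t | c_j)
    return math.log((p_t_c + eps) / (p_t + eps))

def mi_score(i, cf, df, docs_per_class, docs, use_max=True):
    """Aggregate over categories: maximum, or class-prior-weighted expectation."""
    scores = [mutual_information(i, j, cf, df, docs_per_class, docs)
              for j in range(len(docs_per_class))]
    if use_max:
        return max(scores)
    priors = [n / docs for n in docs_per_class]
    return sum(p * s for p, s in zip(priors, scores))
```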

    3. Chi-square statistics

In practice, either the expectation of the chi-square statistic over all categories or the maximum chi-square statistic over any single category is taken as the feature's score.

where $N$ is the total number of documents; $A$ is the number of documents that contain feature $t$ and belong to class $c$; $B$ is the number of documents that contain $t$ but do not belong to $c$; $C$ is the number of documents that do not contain $t$ but belong to $c$; and $D$ is the number of documents that neither contain $t$ nor belong to $c$. The calculation formula is as follows:

$$\chi^2(t, c) = \frac{N\,(AD - CB)^2}{(A+C)(B+D)(A+B)(C+D)}$$
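A minimal sketch under the same assumptions: the $A$, $B$, $C$, $D$ contingency counts are derived from $cf$, $df$, and $docsPerClass$, and the aggregation over categories mirrors the mutual-information case. Function names are illustrative.

```python
def chi_square(i, j, cf, df, docs_per_class, docs):
    """chi^2(t_i, c_j) from the 2x2 contingency table of feature vs. class."""
    A = cf[i][j]                   # t present, class c
    B = df[i] - A                  # t present, not class c
    C = docs_per_class[j] - A      # t absent, class c
    D = docs - df[i] - C           # t absent, not class c
    num = docs * (A * D - C * B) ** 2
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return num / den if den else 0.0

def chi_square_score(i, cf, df, docs_per_class, docs, use_max=True):
    """Aggregate over categories: maximum, or class-prior-weighted expectation."""
    scores = [chi_square(i, j, cf, df, docs_per_class, docs)
              for j in range(len(docs_per_class))]
    if use_max:
        return max(scores)
    priors = [n / docs for n in docs_per_class]
    return sum(p * s for p, s in zip(priors, scores))
```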

    4. Expected cross-entropy

The only difference from information gain is that expected cross-entropy (ECE) does not take into account the case where the feature does not appear. The calculation formula is as follows:

$$ECE(t) = P(t)\sum_{i=1}^{|C|} P(c_i|t)\log\frac{P(c_i|t)}{P(c_i)}$$
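And a corresponding sketch of ECE under the same count-based estimates; as above, the function name and the eps smoothing constant are illustrative assumptions.

```python
import math

def expected_cross_entropy(i, cf, df, docs_per_class, docs, eps=1e-12):
    """ECE(t_i) = P(t) * sum_j P(c_j|t) * log( P(c_j|t) / P(c_j) )."""
    if df[i] == 0:
        return 0.0
    p_t = df[i] / docs                 # P(t)
    ece = 0.0
    for j, n_j in enumerate(docs_per_class):
        p_c = n_j / docs               # P(c_j)
        p_c_t = cf[i][j] / df[i]       # P(c_j | t)
        if p_c_t > 0:
            ece += p_c_t * math.log(p_c_t / (p_c + eps))
    return p_t * ece
```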
