Information Gain: A Feature Selection Method for Text Classification

Source: Internet
Author: User

The purpose of feature selection is to pick out the feature items that are most useful for classification. For a computer to handle this, usefulness has to be quantified, and there are a variety of methods for choosing the most helpful features.

In general, selecting around 3,000 features already works quite well; beyond that, the space occupied keeps growing while the improvement in results is not obvious.

Information gain: it measures the importance of a feature item by how much information the item contributes to the classification as a whole, and on that basis decides whether to keep or discard the item.

The information gain of a feature item t_i is the difference between the amount of information available for the classification with that feature and without it, where the amount of information is measured by entropy.
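In symbols (notation mine, following the usual formulation: C is the class variable, t the feature, t̄ its absence, and H the entropy defined below):

$$\mathrm{IG}(t) = H(C) - \left[\, P(t)\, H(C \mid t) + P(\bar{t})\, H(C \mid \bar{t}) \,\right]$$

Here H(C | t) is the entropy of the class distribution over documents containing t, and H(C | t̄) over documents not containing it.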

Entropy can be regarded as a measure of how uncertain a random variable is: the greater the entropy, the greater the uncertainty, and the less likely we are to guess its value correctly.
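For a discrete random variable X, this is the Shannon entropy:

$$H(X) = -\sum_{x} P(x)\, \log_2 P(x)$$

For example, a fair coin has H = -(1/2 · log₂ 1/2 + 1/2 · log₂ 1/2) = 1 bit, the maximum uncertainty over two outcomes, while a coin that always lands heads has H = 0, since its value can be guessed with certainty.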

"I've always felt that entropy is a great invention. We can't measure the amount of information, and the invention of entropy solves the problem completely. Worshiped Shannon. 』

Specifically, for text classification: we have a term t_i and want to compute its information gain to decide whether it is helpful for classification. First, look at the entropy of the documents without considering any feature, that is, how much uncertainty we face when classifying with no features at all. Then look at how much uncertainty remains once the feature is taken into account. The difference between the two is, evidently, the information this feature brings us. At this point you might have a doubt: if there is less information before and more information after, wouldn't the subtraction come out negative?

No. What we are using here is entropy, which measures disorder and uncertainty; computing "how much information" really means computing "how much uncertainty". Before the feature is considered, the uncertainty is large and the information helping us classify is small; once the new feature is considered, the uncertainty is smaller and the information is greater. So the difference between the former uncertainty and the latter is exactly the information this feature brings us.


Reference: "Statistical natural language Processing" Zongchengqing


The biggest problem with information gain is that it can only measure a feature's contribution to the system as a whole, not to any specific category. This makes it suitable only for so-called "global" feature selection (all classes share one feature set) rather than "local" feature selection (each category has its own feature set), because some words are highly discriminative for one category while insignificant for the others; a per-class scheme is sketched below.

Reference: http://baike.baidu.com/view/1231985.htm?fromTaglist
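To make the global/local distinction concrete, here is a small sketch (my own illustration, reusing information_gain from the sketch above; scoring each class one-vs-rest is just one plausible way to get per-class scores, not something the reference prescribes):

    def select_global(docs, labels, vocab, k):
        # "Global" selection: one shared feature set, ranked by IG over all classes.
        scores = {t: information_gain(docs, labels, t) for t in vocab}
        return sorted(vocab, key=scores.get, reverse=True)[:k]

    def select_local(docs, labels, vocab, k):
        # "Local" selection: a separate feature set per class, scored against a
        # one-vs-rest binarization, so a word that only distinguishes one class
        # can still rank highly for that class.
        feature_sets = {}
        for c in set(labels):
            binary = ["pos" if y == c else "neg" for y in labels]
            scores = {t: information_gain(docs, binary, t) for t in vocab}
            feature_sets[c] = sorted(vocab, key=scores.get, reverse=True)[:k]
        return feature_sets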
