A decision tree classifies samples by repeatedly selecting the attribute that yields the most information gain.
The core of the algorithm is using information gain to judge how well each attribute separates the classes. Information gain is built on the information entropy of a sample set S (multiple categories are allowed, not just two):

H(S) = -\sum_{i=1}^{c} p_i \log_2 p_i

where p_i is the proportion of samples in S belonging to class i. The information gain of an attribute A is the reduction in entropy obtained by partitioning S on the values of A:

Gain(S, A) = H(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} H(S_v)

where S_v is the subset of samples for which A takes the value v.
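A minimal Python sketch of these two formulas, assuming each training sample is an (attribute_dict, label) pair; that representation and the function names are illustrative choices, not from the original post:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(S) = -sum(p_i * log2(p_i)) over the class proportions p_i."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(samples, attribute):
    """Gain(S, A): entropy of S minus the weighted entropy after splitting on A."""
    n = len(samples)
    # Partition the labels by the value each sample takes on the attribute.
    partitions = {}
    for features, label in samples:
        partitions.setdefault(features[attribute], []).append(label)
    remainder = sum(len(part) / n * entropy(part) for part in partitions.values())
    return entropy([label for _, label in samples]) - remainder

# Tiny usage example (hypothetical data):
samples = [({"wind": "weak"}, "yes"), ({"wind": "weak"}, "yes"),
           ({"wind": "strong"}, "no"), ({"wind": "strong"}, "yes")]
print(information_gain(samples, "wind"))  # ~0.311
```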
Compute the information gain of every attribute and choose the one with the largest gain as the root node of the decision tree. Then split the samples along the chosen attribute's values and, at each branch, continue selecting from the remaining attributes by information gain.
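A hedged sketch of this recursive selection (the ID3 scheme), reusing entropy and information_gain from the block above; the nested-dict tree layout is an assumption made for illustration:

```python
from collections import Counter

def build_tree(samples, attributes):
    labels = [label for _, label in samples]
    # Leaf cases: the node is pure, or no attributes are left to split on;
    # predict the majority class.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Choose the attribute with the largest information gain as this node.
    best = max(attributes, key=lambda a: information_gain(samples, a))
    tree = {best: {}}
    for value in {features[best] for features, _ in samples}:
        branch = [(f, l) for f, l in samples if f[best] == value]
        tree[best][value] = build_tree(branch, [a for a in attributes if a != best])
    return tree
```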
Information gain has a drawback: it favors attributes with many distinct values. To correct this, the gain is normalized by the split information, and the gain ratio is used as the measure (this is what C4.5 does):

SplitInfo(S, A) = -\sum_{v \in Values(A)} \frac{|S_v|}{|S|} \log_2 \frac{|S_v|}{|S|}

GainRatio(S, A) = \frac{Gain(S, A)}{SplitInfo(S, A)}
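The same kind of sketch for these two formulas, again reusing the helpers above; only the function names are my own:

```python
from collections import Counter
from math import log2

def split_information(samples, attribute):
    """SplitInfo(S, A): the entropy of the partition sizes themselves."""
    n = len(samples)
    counts = Counter(features[attribute] for features, _ in samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain_ratio(samples, attribute):
    """GainRatio(S, A) = Gain(S, A) / SplitInfo(S, A)."""
    si = split_information(samples, attribute)
    # SplitInfo is 0 when the attribute has only one value; such a split is useless.
    return information_gain(samples, attribute) / si if si > 0 else 0.0
```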
Advantages of decision trees: samples with missing feature values can still be classified, and the features may contain a certain amount of error; the method is robust to noise.
Disadvantages: decision trees overfit easily, and the resulting tree can grow too large. (Overfitting can be avoided by stopping the growth of the tree early, by pruning, or by random forest methods.)
Pruning method: first let the decision tree grow freely, allowing overfitting to occur. Then convert the decision tree into an equivalent set of rules, one per root-to-leaf path, and remove any condition (node) that has no effect on the result. Repeat this process, traversing the nodes from the bottom up.
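A rough sketch of this rule post-pruning idea, reusing the nested-dict trees built above; the helper names and the use of a held-out validation set to decide whether a node "has no effect" are my own assumptions:

```python
def tree_to_rules(tree, path=()):
    """Convert the tree to an equivalent rule set: one (conditions, label) per leaf."""
    if not isinstance(tree, dict):
        return [(list(path), tree)]
    attribute = next(iter(tree))
    rules = []
    for value, subtree in tree[attribute].items():
        rules += tree_to_rules(subtree, path + ((attribute, value),))
    return rules

def accuracy(rules, samples):
    """Predict with the first matching rule; unmatched samples count as errors."""
    hits = 0
    for features, label in samples:
        for conditions, predicted in rules:
            if all(features.get(a) == v for a, v in conditions):
                hits += predicted == label
                break
    return hits / len(samples)

def prune_rules(rules, validation):
    """Drop any condition whose removal does not lower validation accuracy."""
    for conditions, _ in rules:
        # Try the deepest conditions first, i.e. work bottom-up through the tree.
        for cond in reversed(list(conditions)):
            before = accuracy(rules, validation)
            conditions.remove(cond)
            if accuracy(rules, validation) < before:
                conditions.append(cond)  # the node mattered after all: restore it
    return rules
```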