The previous article introduced the basic concept of decision trees and how the ID3 decision tree chooses splits by information gain. This article describes how to implement an ID3 decision tree in Python. Most of the code comes from the book Machine Learning in Action; I made some changes to it and added some content.

Pseudocode for the decision tree
A decision tree can be generated recursively; the watermelon book gives the following pseudocode:
Input: Training set $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$;
    Attribute set $A = \{a_1, a_2, \ldots, a_d\}$.
Process: function TreeGenerate($D$, $A$)
1: generate node node;
2: if the samples in $D$ all belong to the same class $C$ then
3:     mark node as a class-$C$ leaf node; return
4: end if
5: if $A = \emptyset$ or the samples in $D$ take the same values on $A$ then
6:     mark node as a leaf node whose class is the class with the most samples in $D$; return
7: end if
8: choose the optimal splitting attribute $a_*$ from $A$;
9: for each value $a_*^v$ of $a_*$ do
10:     generate a branch for node; let $D_v$ denote the subset of samples in $D$ that take value $a_*^v$ on $a_*$;
11:     if $D_v$ is empty then
12:         mark the branch node as a leaf node whose class is the class with the most samples in $D$; return
13:     else
14:         take TreeGenerate($D_v$, $A \setminus \{a_*\}$) as the branch node
15:     end if
16: end for
Output: a decision tree rooted at node
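The pseudocode translates to Python almost line by line. Below is a minimal sketch of such a translation; the names (entropy, choose_best_attribute, majority_class, create_tree) are mine rather than the ones used in Machine Learning in Action, and the splitting criterion is the information gain from the previous article:

```python
from collections import Counter
from math import log2


def entropy(dataset):
    """Information entropy of the class labels (the last column)."""
    counts = Counter(sample[-1] for sample in dataset)
    total = len(dataset)
    return -sum(c / total * log2(c / total) for c in counts.values())


def choose_best_attribute(dataset):
    """Index of the attribute with the largest information gain (line 8)."""
    base = entropy(dataset)
    best_gain, best_index = -1.0, 0
    for i in range(len(dataset[0]) - 1):  # last column is the class label
        split_entropy = 0.0
        for value in set(sample[i] for sample in dataset):
            subset = [s for s in dataset if s[i] == value]
            split_entropy += len(subset) / len(dataset) * entropy(subset)
        gain = base - split_entropy
        if gain > best_gain:
            best_gain, best_index = gain, i
    return best_index


def majority_class(labels):
    """The class that occurs most often, used by lines 6 and 12."""
    return Counter(labels).most_common(1)[0][0]


def create_tree(dataset, attributes):
    """Recursive TreeGenerate: each dataset row is a list of attribute
    values followed by the class label; attributes names the columns."""
    labels = [sample[-1] for sample in dataset]
    # Line 2: all samples belong to the same class.
    if labels.count(labels[0]) == len(labels):
        return labels[0]
    # Line 5: no attributes left to split on.  (The "identical values on
    # every attribute" sub-case also ends up here, because each split
    # removes one attribute until none remain.)
    if not attributes:
        return majority_class(labels)
    # Line 8: pick the splitting attribute.
    best = choose_best_attribute(dataset)
    best_attr = attributes[best]
    tree = {best_attr: {}}
    # Lines 9-16: one branch per observed value of the chosen attribute.
    # Because we iterate only over values that occur in D, the empty-D_v
    # case of lines 11-12 never arises in this sketch; handling it would
    # require knowing each attribute's full value domain.
    for value in set(sample[best] for sample in dataset):
        subset = [s[:best] + s[best + 1:]          # D_v, column removed
                  for s in dataset if s[best] == value]
        sub_attrs = attributes[:best] + attributes[best + 1:]  # A \ {a_*}
        tree[best_attr][value] = create_tree(subset, sub_attrs)
    return tree
```

The tree is represented as nested dictionaries, with one key per attribute and one sub-dictionary per attribute value, which is also how the book's code represents the tree.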
The pseudocode contains three conditions that end the recursion:
Line 2: all samples in $D$ belong to the same class, for example they are all good melons; in that case no further splitting is needed.
Line 5: the attribute set is empty, or all remaining samples take the same value on every attribute. For example, if the three remaining watermelons have the same stem, color, and sound, they can no longer be separated by attributes, so the node is labeled with the class that has the most samples among them.
Line 12: some value of the splitting attribute has no samples, for example after several partitions none of the remaining watermelons is pale-white in color; in that case we label that branch leaf with the class that has the most samples at its parent node, i.e. the majority class of the current $D$.
In line 14, $A \setminus \{a_*\}$ denotes set subtraction, that is, removing $a_*$ from the attribute set $A$.
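As a quick check of the stopping conditions, here is a toy run of the sketch above on a small made-up dataset (two binary attributes and a yes/no label):

```python
# Toy dataset (made up for illustration): columns are the attributes
# "no surfacing" and "flippers", followed by the class label.
dataset = [
    [1, 1, 'yes'],
    [1, 1, 'yes'],
    [1, 0, 'no'],
    [0, 1, 'no'],
    [0, 1, 'no'],
]

print(create_tree(dataset, ['no surfacing', 'flippers']))
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
# (key order may vary)
```

The branch with 'no surfacing' = 0 stops at line 2 of the pseudocode because every sample there is a 'no', while the 'no surfacing' = 1 branch needs one more split on 'flippers'.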