Regression tree: Using the least squares error criterion
The training set is: D = {(x1, y1), (x2, y2), ..., (xN, yN)}.
The output y is a continuous variable. The input space is divided into M regions R1, R2, ..., RM, with output values c1, c2, ..., cM for the regions respectively, so the regression tree model can be expressed as:

f(x) = sum_{m=1..M} cm * I(x in Rm)

where I(x in Rm) equals 1 if x falls in region Rm and 0 otherwise.
For a region Rm, the squared error between the model's predictions and the training outputs is:

sum_{xi in Rm} (yi - f(xi))^2
If we split the input space on value s of feature j, the two resulting regions are:

R1(j, s) = {x | x^(j) <= s} and R2(j, s) = {x | x^(j) > s}
We need to minimize the loss function, namely:

min_{j, s} [ sum_{xi in R1(j, s)} (yi - c1)^2 + sum_{xi in R2(j, s)} (yi - c2)^2 ]
Here c1 and c2 are the means of the outputs over R1 and R2 respectively. (This differs from the formula in the statistical learning textbook, where c1 and c2 are each chosen by an inner minimization; but once a region is fixed, the squared error is minimized exactly when the constant is the mean of that region's outputs, so for simplicity the region mean is used directly.)
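The claim that the region mean minimizes the squared error can be checked numerically. The values below are illustrative, not from the text:

```python
# Numerical check that the region mean minimizes the squared error
# sum_i (y_i - c)^2 over a fixed region.
ys = [1.0, 2.0, 4.0, 5.0]

def squared_error(c, ys):
    """Squared error of predicting the constant c for outputs ys."""
    return sum((y - c) ** 2 for y in ys)

mean = sum(ys) / len(ys)  # c = 3.0
# Any other candidate constant gives an error at least as large.
for c in [0.0, 2.5, 3.5, 6.0]:
    assert squared_error(mean, ys) <= squared_error(c, ys)
print(squared_error(mean, ys))  # error at the mean: 10.0
```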
To minimize the squared error, we iterate over each feature in turn, compute the error for every possible split point, and select the split point with the smallest error to divide the input space into two parts. We then apply the same steps recursively to each part until no further split is possible. The tree produced this way is called a least squares regression tree.
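For a single feature, the search described above can be sketched as follows; the function names are illustrative assumptions, not from the text:

```python
# For one feature column: try each observed value as the threshold s,
# split into R1 = {x <= s} and R2 = {x > s}, score the split by the
# summed squared error around each region's mean, and keep the best.

def region_error(ys):
    """Squared error of ys around their mean."""
    m = sum(ys) / len(ys)
    return sum((v - m) ** 2 for v in ys)

def best_threshold(xs, ys):
    """Best threshold s and its loss for one feature column xs."""
    best_s, best_loss = None, float("inf")
    for s in sorted(set(xs)):
        y1 = [y for x, y in zip(xs, ys) if x <= s]
        y2 = [y for x, y in zip(xs, ys) if x > s]
        if not y1 or not y2:
            continue  # skip splits that leave a region empty
        loss = region_error(y1) + region_error(y2)
        if loss < best_loss:
            best_s, best_loss = s, loss
    return best_s, best_loss

# With outputs that jump after x = 2, the search recovers that threshold:
print(best_threshold([1, 2, 3, 4], [1.0, 1.0, 5.0, 5.0]))  # (2, 0.0)
```

The full algorithm repeats this search over every feature and picks the (j, s) pair with the smallest loss.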
Least squares regression tree generation algorithm:
1) Loop over each feature j and each value s of that feature, compute the loss function for each candidate split point (j, s), and select the split point that minimizes the loss.
2) Divide the current input space into two parts using the split point from the previous step.
3) Repeat the split-point search on each of the two resulting regions, and so on, until no region can be divided further.
4) Finally, the input space is divided into M regions R1, R2, ..., RM, and the resulting decision tree is:

f(x) = sum_{m=1..M} cm * I(x in Rm)

where cm is the mean of the output values in region Rm.
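The four steps above can be sketched end to end as a small recursive builder. The nested-dict node layout, the stopping rules, and all names here are illustrative assumptions, not part of the original algorithm statement:

```python
# Minimal least-squares regression tree: recursively search for the
# best (j, s) split, partition the data, and stop when a region is
# too small or constant; leaves store the region mean c_m.

def _sse(ys):
    """Squared error of ys around their mean."""
    m = sum(ys) / len(ys)
    return sum((v - m) ** 2 for v in ys)

def _best_split(X, y):
    """Best (j, s, loss) over all features and thresholds, or None."""
    best = None
    for j in range(len(X[0])):
        for s in sorted({row[j] for row in X}):
            y1 = [v for row, v in zip(X, y) if row[j] <= s]
            y2 = [v for row, v in zip(X, y) if row[j] > s]
            if not y1 or not y2:
                continue  # degenerate split, skip
            loss = _sse(y1) + _sse(y2)
            if best is None or loss < best[2]:
                best = (j, s, loss)
    return best

def build_tree(X, y, min_size=2):
    """Grow a least-squares regression tree as nested dicts."""
    if len(y) < min_size or len(set(y)) == 1:
        return {"value": sum(y) / len(y)}  # leaf: c_m = region mean
    split = _best_split(X, y)
    if split is None:
        return {"value": sum(y) / len(y)}
    j, s, _ = split
    left = [(r, v) for r, v in zip(X, y) if r[j] <= s]
    right = [(r, v) for r, v in zip(X, y) if r[j] > s]
    return {"feature": j, "threshold": s,
            "left": build_tree([r for r, _ in left], [v for _, v in left], min_size),
            "right": build_tree([r for r, _ in right], [v for _, v in right], min_size)}

def predict(node, x):
    """Route x down the tree and return the leaf mean."""
    while "value" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["value"]

tree = build_tree([[1], [2], [10], [11]], [1.0, 1.0, 9.0, 9.0])
print(predict(tree, [1.5]))  # 1.0
```

Real implementations add pruning and depth limits on top of this skeleton; the sketch only covers the generation steps listed above.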
Summary: the complexity of this method is high, especially in the search for split points, which must traverse every possible value of every current feature. If there are F features, each with N possible values, and the resulting decision tree has S internal nodes, the algorithm's time complexity is O(F * N * S).
CART regression tree algorithm process