Full Stack Engineer Development Manual (author: Shangpeng)
Python Data Mining Series Tutorials
Reference for the GBDT algorithm: https://blog.csdn.net/luanpeng825485697/article/details/79766455
Gradient boosting is a boosting method. Its core idea is that each new model is built along the gradient descent direction of the loss function of the model built so far. The loss function evaluates how well the model performs (usually goodness of fit plus a regularization term): the smaller the loss, the better the performance. Driving the loss steadily downward keeps correcting the model and improving its performance, and the most effective way to do that is to move along the negative gradient, since the gradient direction gives the fastest descent.
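In symbols (a standard formulation of gradient boosting added here for clarity; the notation $F_m$, $h_m$, $\nu$ is not from the original post): at stage $m$, a weak learner $h_m$ is fitted to the negative gradient of the loss at the current model's predictions, then added to the ensemble with a learning rate $\nu$.

```latex
% Pseudo-residuals: the negative gradient of the loss at the current model
r_{im} = -\left[ \frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)} \right]_{F = F_{m-1}}
% Fit the m-th weak learner to the pseudo-residuals (e.g. by least squares)
h_m \approx \arg\min_h \sum_i \big( r_{im} - h(x_i) \big)^2
% Update the ensemble along that direction with learning rate \nu
F_m(x) = F_{m-1}(x) + \nu \, h_m(x)
```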
Using GBDT to construct new features

Features determine the upper bound of a model's performance; deep learning methods, for example, are largely about finding better representations of the data. If the data can be represented in a linearly separable form, a simple linear model can already achieve good results. Building new features with GBDT likewise lets the features represent the data better.
Main idea: the feature combinations represented by the root-to-leaf paths of GBDT's trees are used directly as input features for LR (logistic regression).
First, train a GBDT model on the existing features; then use the trees learned by the GBDT model to construct new features; finally, add these new features to the original features and train the downstream model. The new feature vector consists of 0/1 values, and each element of the vector corresponds to one leaf node of one tree in the GBDT model. When a sample passes through a tree and finally falls into one of its leaf nodes, the element of the new feature vector corresponding to that leaf is set to 1, while the elements corresponding to the tree's other leaf nodes are 0. The length of the new feature vector therefore equals the total number of leaf nodes over all trees in the GBDT model.
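A minimal sketch of this pipeline, assuming scikit-learn; the dataset, splits, and hyperparameters below are illustrative rather than from the original post. `apply()` returns the leaf each sample falls into in every tree, and one-hot encoding those leaf indices produces exactly the 0/1 leaf features described above.

```python
# A sketch of GBDT + LR feature construction (assumes scikit-learn;
# dataset and hyperparameters are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# Use disjoint splits for the two stages so the LR is not fitted on
# leaves produced from the GBDT's own training data.
X_gbdt, X_lr, y_gbdt, y_lr = train_test_split(X, y, test_size=0.5, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X_gbdt, y_gbdt)

# apply() gives, for each sample, the index of the leaf it reaches in
# every tree: shape (n_samples, n_estimators, 1) for binary classification.
leaf_indices = gbdt.apply(X_lr)[:, :, 0]

# One-hot encode the leaf indices: every leaf of every tree becomes
# one 0/1 dimension of the new feature vector.
encoder = OneHotEncoder(handle_unknown="ignore")
leaf_features = encoder.fit_transform(leaf_indices)

lr = LogisticRegression(max_iter=1000)
lr.fit(leaf_features, y_lr)
```

In practice the one-hot leaf features can also be concatenated with the original features before fitting the LR, as the text describes; splitting the data between the two stages avoids fitting the LR on leaves derived from the GBDT's own training samples.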
As shown in the figure above, suppose GBDT uses 2 decision trees as weak learners. The first tree has 3 leaf nodes $l^1_1, l^2_1, l^3_1$ and the second tree has 2 leaf nodes $l^1_2, l^2_2$, so a 5-dimensional new feature vector is generated for each sample. If a sample falls into leaf 1 of the first tree and leaf 2 of the second tree, its new feature vector is $[1, 0, 0, 0, 1]$: the 1st and 5th dimensions take the value 1 and all other dimensions are 0.
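A hand-rolled illustration of that encoding (the leaf counts and leaf indices match the 2-tree example above; 0-based indexing is an implementation choice):

```python
import numpy as np

# Two trees: the first has 3 leaves, the second has 2 leaves.
leaves_per_tree = [3, 2]
# The sample lands in leaf 1 of tree 1 and leaf 2 of tree 2 (0-based: 0 and 1).
sample_leaves = [0, 1]

vec = np.zeros(sum(leaves_per_tree), dtype=int)
offset = 0
for n_leaves, leaf in zip(leaves_per_tree, sample_leaves):
    vec[offset + leaf] = 1   # mark the leaf this sample reached
    offset += n_leaves       # the next tree's leaves start after this tree's
print(vec)  # [1 0 0 0 1]
```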
Meaning: a leaf node of a decision tree represents a combination of attribute conditions, and such a combination survives in the tree precisely because it is significant (otherwise it would have been pruned away). For example, in ad-click prediction, ad attributes such as (country, festival) yield meaningful feature combinations like (China, Spring Festival) and (USA, Thanksgiving). Extracting all of these meaningful feature combinations as new features is the purpose of building features with GBDT.
Although each of these feature combinations is meaningful, how significant each combination is remains unknown. The combinations therefore need to be re-encoded as new features and used to train another model, which learns a weight for each feature combination; from those weights, the contribution of the several combinations active in a sample can then be computed.
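One standard way to learn such weights, assuming the LR stage described earlier (notation added here): let $z_k(x) \in \{0, 1\}$ indicate whether sample $x$ activates the $k$-th feature combination (leaf); logistic regression then models

```latex
% Logistic regression over the 0/1 combination indicators z_k(x);
% w_k is the learned weight of the k-th feature combination.
P(y = 1 \mid x) = \sigma\Big( b + \sum_k w_k \, z_k(x) \Big),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}
```

so each learned $w_k$ directly measures how strongly its feature combination pushes the prediction up or down.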
The following example should make this easier to understand. Suppose we already know a set of meaningful feature combinations, where $x^i_j$ denotes the $i$-th value taken by the $j$-th feature.
| Feature combination | Combination weight |
| --- | --- |
| $x^1_1, x^1_2, x^1_3$ | $w_1$ |
| $x^1_1, x^3_2$ | $w_2$ |
| $x^2_1, x^3_3$ | $w_3$ |
| $x^4_1, x^6_2$ | |
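As a hypothetical illustration of how such a table is used (the numeric weight values below are made up, and the unspecified last weight is omitted), a sample's score is the sum of the weights of the combinations it activates:

```python
# Made-up weight values for the combinations in the table above;
# the last combination's weight was not given, so it is omitted here.
weights = {
    ("x1_1", "x1_2", "x1_3"): 0.8,   # w_1
    ("x1_1", "x3_2"): -0.3,          # w_2
    ("x2_1", "x3_3"): 1.2,           # w_3
}

# Feature values observed for one sample.
sample = {"x1_1", "x3_2", "x2_1", "x3_3"}

# A combination is active when all of its feature values appear in the sample.
score = sum(w for combo, w in weights.items() if set(combo) <= sample)
print(score)  # -0.3 + 1.2 = 0.9
```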