I recently used GBRT and LR to solve a regression problem and found that GBRT converges quickly, and its MSE is usually smaller than LR's. However, while most of GBRT's predictions are close to the true values, a few of them are wildly off; LR, on the other hand, behaves normally on every regression sample.
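As a quick sketch of how you might compare the two, here is some scikit-learn code on synthetic data that reports not just the MSE but also the worst per-sample error. The dataset and hyperparameters are illustrative only (on this linear toy data LR will actually win outright); the point is that the worst-case error, not the average, is the metric that exposes the behavior described above.

```python
# Compare GBRT and LR on average error (MSE) and worst-case per-sample error.
# Synthetic data for illustration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (GradientBoostingRegressor(random_state=0), LinearRegression()):
    pred = model.fit(X_train, y_train).predict(X_test)
    print(type(model).__name__,
          "MSE:", round(mean_squared_error(y_test, pred), 1),
          "worst sample error:", round(np.abs(y_test - pred).max(), 1))
```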
For example, suppose the problem is to assess the value of a Taobao shop. We want to use the shop's history from last month (PV, UV, clicks, transaction volume, evaluations, number of comments, star rating, and so on) to predict its potential value next month. Suppose a shop has the following features:
| PV | UV | Click | Trading volume | Evaluation | Number of comments | Star | Discount |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 30000 | 40000 | 3666 | 8990 | 77 | 0 | 0 | 0 |
This shop's PV and UV are very high and its transaction volume is also very high, but the owner is unconventional: he does not run promotional activities, has turned comments off, and the shop has no star rating.
Regression with GBRT: GBRT is a tree model. Suppose that after training on the samples, the root node of the first tree splits on last month's transaction volume, and a deeper node splits on the star rating. Clearly, GBRT has learned from the training samples that a shop with a low star rating and no promotional activity is not worth much; it does not care that your transaction volume last month was far above 500.
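To make this concrete, here is a minimal hand-written sketch of the kind of tree GBRT could end up with; the split features, thresholds, and leaf values are all hypothetical, chosen only to mirror the example above rather than learned from any data.

```python
# A hypothetical learned tree that routes the example shop to a low-value
# leaf. A real GBRT is an ensemble of many such trees, but one is enough
# to show the failure mode.

def tree_predict(shop):
    if shop["volume"] > 500:            # root split: last month's volume
        if shop["star"] > 0:            # deeper split: star rating
            return 50000.0              # high-value leaf
        return 800.0                    # no stars: low-value leaf, and the
                                        # volume of 8990 no longer matters
    return 500.0                        # low-volume leaf

shop = {"pv": 30000, "uv": 40000, "click": 3666, "volume": 8990,
        "evaluation": 77, "comments": 0, "star": 0, "discount": 0}
print(tree_predict(shop))  # 800.0, despite a volume far above 500
```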
For LR, suppose the learned weights are all 0.1. Then 0.1 * 8990 is much larger than 0.1 * 500, so even though the star and activity features contribute low scores, the sample still comes back with a relatively high predicted value.
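Here is the same shop scored by a linear model, as a sketch; the uniform weight of 0.1 is just the assumption made in the text, not a value learned from data.

```python
# Score the example shop with a linear model whose weights are all 0.1
# (the assumption made in the text, not learned values).
shop = {"pv": 30000, "uv": 40000, "click": 3666, "volume": 8990,
        "evaluation": 77, "comments": 0, "star": 0, "discount": 0}
weights = {k: 0.1 for k in shop}

# Volume alone contributes 0.1 * 8990 = 899, far more than 0.1 * 500 = 50;
# the zero-valued star/comments/discount features contribute nothing, so
# they cannot drag the prediction down to an outrageously low value.
score = sum(weights[k] * shop[k] for k in shop)
print(round(score, 1))  # 8273.3 = 3000 + 4000 + 366.6 + 899 + 7.7
```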
Conclusion: the example above is just one pathological case in the model, but in practice, if your model produces even one outrageous value, the business side will obviously not let you off the hook. In a real application environment where the tolerance for such wild outliers is zero (people would rather have every sample be slightly off than have one sample off by 10,000), and where your samples often contain missing values for various reasons, LR is a better choice than a tree model.