A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning


by Jason Brownlee on September 9, in XGBoost

Gradient boosting is one of the most powerful techniques for building predictive models.

In this post you'll discover the gradient boosting machine learning algorithm and get a gentle introduction to where it came from and how it works.

After reading this post, you'll know:

    • The origin of boosting from learning theory and AdaBoost.
    • How gradient boosting works, including the loss function, weak learners and the additive model.
    • How to improve performance over the base algorithm with various regularization schemes.

Let's get started.

A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning
Photo by BRANDO.N, some rights reserved.


The Origin of Boosting

The idea of boosting came out of the question of whether a weak learner can be modified to become better.

Michael Kearns articulated the goal as the "Hypothesis Boosting Problem", stating the goal from a practical standpoint as:

Efficient algorithm for converting relatively poor hypotheses into very good hypotheses

- Thoughts on Hypothesis Boosting [PDF], 1988

A weak hypothesis or weak learner is defined as one whose performance is at least slightly better than random chance.

These ideas built upon Leslie Valiant's work on distribution-free or Probably Approximately Correct (PAC) learning, a framework for investigating the complexity of machine learning problems.

Hypothesis boosting was the idea of filtering observations, leaving those observations that the weak learner can handle and focusing on developing new weak learners to handle the remaining difficult observations.

The idea is to use the weak learning method several times to get a succession of hypotheses, each one refocused on the examples that the previous ones found difficult and misclassified. ... Note, however, it is not obvious at all how this can be done

- Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World, page 152, 2013

AdaBoost, the First Boosting Algorithm

The first realization of boosting that saw great success in application was Adaptive Boosting, or AdaBoost for short.

Boosting refers to the problem of producing a very accurate prediction rule by combining rough and moderately inaccurate rules of thumb.

- A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting [PDF], 1995

The weak learners in AdaBoost are decision trees with a single split, called decision stumps for their shortness.

AdaBoost works by weighting the observations, putting more weight on difficult to classify instances and less on those already handled well. New weak learners are added sequentially that focus their training on the more difficult patterns.

This means that samples that are difficult to classify receive increasingly larger weights until the algorithm identifies a model that correctly classifies these samples

- Applied Predictive Modeling, 2013

Predictions are made by majority vote of the weak learners' predictions, weighted by their individual accuracy. The most successful form of the AdaBoost algorithm was for binary classification problems and was called AdaBoost.M1.
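
As a concrete illustration, below is a minimal sketch of AdaBoost with decision stumps using scikit-learn (assumed to be installed); the dataset and settings are illustrative only, not recommendations.

    # Minimal AdaBoost sketch: scikit-learn's AdaBoostClassifier uses a
    # decision stump (a one-split tree) as its default weak learner.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic binary classification data (illustrative only).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=7)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

    # 50 weak learners are added sequentially, each reweighting the hard examples.
    model = AdaBoostClassifier(n_estimators=50, random_state=7)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))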

You can learn more about the AdaBoost algorithm in the post:

    • Boosting and AdaBoost for Machine Learning

Generalization of AdaBoost as Gradient Boosting

AdaBoost and related algorithms were recast in a statistical framework first by Breiman, who called them ARCing algorithms.

Arcing is an acronym for Adaptive Reweighting and Combining. Each step in an arcing algorithm consists of a weighted minimization followed by a recomputation of [the classifiers] and [weighted input].

- Prediction Games and Arcing Algorithms [PDF], 1997

This framework was further developed by Friedman and called Gradient Boosting Machines, later called just gradient boosting or gradient tree boosting.

The statistical framework cast boosting as a numerical optimization problem where the objective is to minimize the loss of the model by adding weak learners using a gradient-descent-like procedure.

This class of algorithms was described as a stage-wise additive model. This is because one new weak learner is added at a time and existing weak learners in the model are frozen and left unchanged.

Note that this stagewise strategy is different from stepwise approaches that readjust previously entered terms when new ones are added.

- Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

The generalization allowed arbitrary differentiable loss functions to be used, expanding the technique beyond binary classification problems to support regression, multi-class classification and more.

How Gradient Boosting Works

Gradient boosting involves three elements:

    1. A loss function to be optimized.
    2. A weak learner to make predictions.
    3. An additive model to add weak learners to minimize the loss function.

1. Loss Function

The loss function used depends on the type of problem being solved.

It must be differentiable, but many standard loss functions are supported and you can define your own.

For example, regression may use a squared error and classification may use logarithmic loss.

A benefit of the gradient boosting framework is that a new boosting algorithm does not have to be derived for each loss function that may want to be used; instead, it is a generic enough framework that any differentiable loss function can be used.
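
As a rough illustration of that generality, here is a minimal NumPy sketch (the function names are my own) of two common losses and the negative gradients, or pseudo-residuals, that each new weak learner is fit to.

    import numpy as np

    # Squared error for regression; its negative gradient is just the residual.
    def squared_error(y, pred):
        return 0.5 * np.mean((y - pred) ** 2)

    def squared_error_neg_gradient(y, pred):
        return y - pred

    # Logarithmic loss for binary classification on predicted probabilities.
    def log_loss(y, prob, eps=1e-12):
        p = np.clip(prob, eps, 1 - eps)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Taken with respect to the log-odds, the negative gradient reduces to
    # the label minus the predicted probability.
    def log_loss_neg_gradient(y, prob):
        return y - prob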

2. Weak Learner

Decision trees are used as the weak learner in gradient boosting.

Specifically, regression trees are used that output real values for splits and whose output can be added together, allowing subsequent models' outputs to be added to "correct" the residuals in the predictions.

Trees are constructed in a greedy manner, choosing the best split points based on purity scores like Gini or to minimize the loss.

Initially, such as in the case of AdaBoost, very short decision trees were used that had a single split, called a decision stump. Larger trees can be used, generally with 4-to-8 levels.

It is common to constrain the weak learners in specific ways, such as a maximum number of layers, nodes, splits or leaf nodes.

This is to ensure the learners remain weak, but can still be constructed in a greedy manner.
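
For instance, a single constrained regression tree of the kind described above could be built with scikit-learn as follows; this is a sketch, and the data and constraint values are illustrative.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Illustrative 1-D regression data.
    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

    # Keep the learner weak: shallow depth and a minimum leaf size.
    weak_learner = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10)
    weak_learner.fit(X, y)

    # The tree outputs real values that later trees can add to and correct.
    print(weak_learner.predict(X[:5]))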

3. Additive Model

Trees are added one at a time, and existing trees in the model are not changed.

A gradient descent procedure is used to minimize the loss when adding trees.

Traditionally, gradient descent is used to minimize a set of parameters, such as the coefficients in a regression equation or weights in a neural network. After calculating error or loss, the weights are updated to minimize that error.

Instead of parameters, we have weak learner sub-models, or more specifically decision trees. After calculating the loss, to perform the gradient descent procedure we must add a tree to the model that reduces the loss (i.e. follows the gradient). We do this by parameterizing the tree, then modifying the parameters of the tree and moving in the right direction by reducing the residual loss.

Generally this approach is called functional gradient descent or gradient descent with functions.

One way to produce a weighted combination of classifiers which optimizes [the cost] is by gradient descent in function space

- Boosting Algorithms as Gradient Descent in Function Space [PDF], 1999

The output of the new tree is then added to the output of the existing sequence of trees in an effort to correct or improve the final output of the model.

A fixed number of trees is added, or training stops once loss reaches an acceptable level or no longer improves on an external validation dataset.
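
To make the additive procedure concrete, here is a minimal from-scratch sketch for the squared error case, where the negative gradient is simply the residual; it assumes scikit-learn for the trees, and all settings are illustrative.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

    n_trees, learning_rate = 100, 0.1
    prediction = np.full_like(y, y.mean())  # start from a constant model
    trees = []

    for _ in range(n_trees):
        residuals = y - prediction               # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)                   # new weak learner fit to the residuals
        prediction += learning_rate * tree.predict(X)  # earlier trees stay frozen
        trees.append(tree)

    print("training MSE:", np.mean((y - prediction) ** 2))

A new example is then scored by summing the initial constant prediction and the scaled outputs of all stored trees.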

Improvements to Basic Gradient Boosting

Gradient boosting is a greedy algorithm and can overfit a training dataset quickly.

It can benefit from regularization methods that penalize various parts of the algorithm and generally improve the performance of the algorithm by reducing overfitting.

In this section we'll look at 4 enhancements to basic gradient boosting:

    1. Tree Constraints
    2. Shrinkage
    3. Random Sampling
    4. Penalized Learning

1. Tree Constraints

It is important that the weak learners have skill but remain weak.

There are a number of ways that the trees can be constrained.

A good general heuristic is that the more constrained tree creation is, the more trees you will need in the model, and the reverse: the less constrained the individual trees, the fewer trees will be required.

Below are some constraints that can be imposed on the construction of decision trees (a configuration sketch follows the list):

    • Number of trees: generally adding more trees to the model can be very slow to overfit. The advice is to keep adding trees until no further improvement is observed.
    • Tree depth: deeper trees are more complex trees and shorter trees are preferred. Generally, better results are seen with 4-to-8 levels.
    • Number of nodes or number of leaves: like depth, this can constrain the size of the tree, but it is not constrained to a symmetrical structure if other constraints are used.
    • Number of observations per split: imposes a minimum constraint on the amount of training data at a training node before a split can be considered.
    • Minimum improvement to loss: a constraint on the improvement of any split added to a tree.
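
The sketch below shows how these constraints map onto scikit-learn's GradientBoostingRegressor; the parameter values are illustrative, not recommendations.

    from sklearn.ensemble import GradientBoostingRegressor

    model = GradientBoostingRegressor(
        n_estimators=500,           # number of trees
        max_depth=4,                # tree depth
        max_leaf_nodes=None,        # alternative cap on the number of leaves
        min_samples_split=10,       # observations required before a split is considered
        min_impurity_decrease=0.0,  # minimum improvement required by any split
    )
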
2. Weighted Updates

The predictions of each tree are added together sequentially.

The contribution of each tree to this sum can be weighted to slow down the learning by the algorithm. This weighting is called a shrinkage or a learning rate.

Each update is simply scaled by the value of the "learning rate" parameter v

- Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

The effect is that learning is slowed down, in turn requiring more trees to be added to the model, in turn taking longer to train, providing a configuration trade-off between the number of trees and the learning rate.

Decreasing the value of V [the learning rate] increases the best value for M [the number of trees].

- Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

It is common to have small values in the range of 0.1 to 0.3, as well as values less than 0.1.

Similar to a learning rate in stochastic optimization, shrinkage reduces the influence of each individual tree and leaves space for future trees to improve the model.

- Stochastic Gradient Boosting [PDF], 1999
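
As a small sketch of that trade-off in scikit-learn (values illustrative), a smaller learning rate is typically paired with a larger number of trees:

    from sklearn.ensemble import GradientBoostingRegressor

    # Fewer, coarser updates: each tree contributes a lot.
    fast_model = GradientBoostingRegressor(learning_rate=0.3, n_estimators=100)

    # Heavier shrinkage: each tree contributes a little, so many more are needed.
    slow_model = GradientBoostingRegressor(learning_rate=0.01, n_estimators=2000)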

3. Stochastic Gradient Boosting

A big insight into bagging ensembles and random forests is that trees can be greedily created from subsamples of the training dataset.

This same benefit can be used to reduce the correlation between the trees in the sequence in gradient boosting models.

This variation of boosting is called stochastic gradient boosting.

At each iteration a subsample of the training data is drawn at random (without replacement) from the full training dataset. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.

- Stochastic Gradient Boosting [PDF], 1999

A few variants of stochastic boosting that can be used:

    • Subsample rows before creating each tree.
    • Subsample columns before creating each tree.
    • Subsample columns before considering each split.

Generally, aggressive sub-sampling such as selecting only 50% of the data has been shown to be beneficial.

According to user feedback, using column sub-sampling prevents over-fitting even more so than the traditional row sub-sampling

- XGBoost: A Scalable Tree Boosting System, 2016
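
These subsampling variants correspond roughly to the following parameters of the xgboost Python package (assumed installed; the 0.5 values are illustrative):

    from xgboost import XGBRegressor

    model = XGBRegressor(
        n_estimators=500,
        subsample=0.5,          # rows: each tree sees a random 50% of the training data
        colsample_bytree=0.5,   # columns sampled once per tree
        colsample_bynode=0.5,   # columns sampled again before each split
    )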

4. Penalized Gradient Boosting

Additional constraints can be imposed on the parameterized trees in addition to their structure.

Classical decision trees like CART are not used as weak learners; instead a modified form called a regression tree is used that has numeric values in the leaf nodes (also called terminal nodes). The values in the leaves of the trees can be called weights in some literature.

As such, the leaf weight values of the trees can be regularized using popular regularization functions, such as:

    • L1 regularization of weights.
    • L2 regularization of weights.

The additional regularization term helps to smooth the final learnt weights to avoid over-fitting. Intuitively, the regularized objective will tend to select a model employing simple and predictive functions.

- XGBoost: A Scalable Tree Boosting System, 2016
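
In the xgboost Python package these penalties are exposed as reg_alpha and reg_lambda; a minimal sketch with illustrative values is:

    from xgboost import XGBRegressor

    model = XGBRegressor(
        n_estimators=500,
        reg_alpha=0.1,   # L1 regularization of the leaf weights
        reg_lambda=1.0,  # L2 regularization of the leaf weights
    )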

Gradient Boosting Resources

Gradient boosting is a fascinating algorithm and I am sure you want to go deeper.

This section lists various resources that you can use to learn more about the gradient boosting algorithm.

Gradient Boosting Videos
    • Gradient Boosting Machine Learning, Trevor Hastie, 2014
    • Gradient Boosting, Alexander Ihler, 2012
    • GBM, John Mount, 2015
    • Learning: Boosting, MIT 6.034 Artificial Intelligence, 2010
    • XGBoost: An R Package for Fast and Accurate Gradient Boosting, 2016
    • XGBoost: A Scalable Tree Boosting System, Tianqi Chen, 2016
Gradient Boosting in Textbooks
    • Section 8.2.3 Boosting, page 321, An Introduction to Statistical Learning: with Applications in R.
    • Section 8.6 Boosting, page 203, Applied Predictive Modeling.
    • Section 14.5 Stochastic Gradient Boosting, page 390, Applied Predictive Modeling.
    • Section 16.4 Boosting, page 556, Machine Learning: A Probabilistic Perspective.
    • The chapter Boosting and Additive Trees, page 337, The Elements of Statistical Learning: Data Mining, Inference, and Prediction.
Gradient Boosting Papers
    • Thoughts on Hypothesis Boosting [PDF], Michael Kearns, 1988
    • A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting [PDF], 1995
    • Arcing the Edge [PDF], 1998
    • Stochastic Gradient Boosting [PDF], 1999
    • Boosting Algorithms as Gradient Descent in Function Space [PDF], 1999
Gradient Boosting Slides
    • Introduction to Boosted Trees, 2014
    • A Gentle Introduction to Gradient Boosting, Cheng Li
Gradient Boosting Web Pages
    • Boosting (machine learning)
    • Gradient boosting
    • Gradient Tree Boosting in scikit-learn


Summary

In this post you discovered the gradient boosting algorithm for predictive modeling in machine learning.

Specifically you learned:

    • The history of boosting in learning theory and AdaBoost.
    • How the gradient boosting algorithm works with a loss function, weak learners and an additive model.
    • How to improve the performance of gradient boosting with regularization.

Do you have any questions about the gradient boosting algorithm or about this post? Ask your questions in the comments and I'll do my best to answer.

About Jason Brownlee: Jason Brownlee is the editor-in-chief at MachineLearningMastery.com. He is a husband, proud father, academic researcher, author, professional developer and a machine learning practitioner. He has a Masters and a PhD in Artificial Intelligence, has published books on machine learning and has written operational code that is running in production.
