Bootstrap Aggregating (Bagging), Ensembles, and Ensemble Neural Networks

Source: Internet
Author: User

Zh.wikipedia.org/wiki/bagging algorithm

The bagging algorithm (English: Bootstrap aggregating, bootstrap aggregation), also known simply as bagging, is an ensemble learning algorithm in the field of machine learning. It was originally proposed by Leo Breiman in 1994. Bagging can be combined with other classification and regression algorithms to improve their accuracy and stability, and it helps avoid overfitting by reducing the variance of the results.

Given a training set of size n, the bagging algorithm draws m subsets of size n′ by sampling uniformly with replacement (i.e., the bootstrap method) and uses each as a new training set. Applying a classification or regression algorithm to each of these sets yields m models; their outputs are then combined, for example by averaging or by majority vote, to give the bagged result.
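As a minimal sketch of this sampling-and-fitting step (assuming scikit-learn decision trees as the base models; the function name and parameter values here are illustrative, not from the original text):

```python
# Bagging, step 1: draw m bootstrap samples and fit one model on each.
# Assumes numpy arrays X (features) and y (labels); here n' = n, the common choice.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_bagged_models(X, y, m=25, random_state=0):
    """Return m decision trees, each fit on a bootstrap sample of (X, y)."""
    rng = np.random.RandomState(random_state)
    n = len(X)
    models = []
    for _ in range(m):
        idx = rng.randint(0, n, size=n)   # n indices drawn uniformly with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models
```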

http://machine-learning.martinsewell.com/ensembles/bagging/

"Bootstrap samples < Span class= "Mwe-math-mathml-inline mwe-math-mathml-a11y" > Put back sampling random samples with replacement "

Bagging (Breiman, 1996), a name derived from "bootstrap aggregation", was the first effective method of ensemble learning and is one of the simplest methods of arching [1]. The meta-algorithm, which is a special case of model averaging, was originally designed for classification and is usually applied to decision tree models, but it can be used with any type of model for classification or regression. The method builds multiple versions of a training set using the bootstrap, i.e., sampling with replacement. Each of these data sets is used to train a different model. The outputs of the models are combined by averaging (in the case of regression) or voting (in the case of classification) to create a single output. Bagging is only effective when using unstable (i.e., a small change in the training set can cause a significant change in the model) nonlinear models.
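A minimal sketch of that combination step, assuming each fitted model exposes a scikit-learn-style predict method (the function names are illustrative):

```python
import numpy as np

def combine_regression(models, X):
    # Regression: average the outputs of the individual models.
    return np.mean([m.predict(X) for m in models], axis=0)

def combine_classification(models, X):
    # Classification: majority vote over the predicted labels
    # (assumes integer-coded class labels).
    votes = np.array([m.predict(X) for m in models]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```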

https://www.packtpub.com/mapt/book/big_data_and_business_intelligence/9781787128576/7/ch07lvl1sec46/Bagging--building-an-ensemble-of-classifiers-from-bootstrap-samples

Bagging is an ensemble learning technique that is closely related to the MajorityVoteClassifier we implemented in the previous section, as illustrated in the following diagram:

However, instead of using the same training set to fit the individual classifiers in the ensemble, we draw bootstrap samples (random samples with replacement) from the initial training set, which is why bagging is also known as bootstrap aggregating. To provide a more concrete example of how bootstrapping works, let's consider the example shown in the following figure. Here, we have seven different training instances (denoted as indices 1-7) that are sampled randomly with replacement in each round of bagging. Each bootstrap sample is then used to fit a classifier, which is most typically an unpruned decision tree:
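A sketch of this setup with scikit-learn's BaggingClassifier (the dataset and hyperparameter values below are illustrative assumptions, not taken from the book):

```python
# Bagging with unpruned decision trees fit on bootstrap samples of the training set.
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)

tree = DecisionTreeClassifier(max_depth=None)      # unpruned decision tree
bag = BaggingClassifier(tree,
                        n_estimators=500,          # number of bagging rounds
                        max_samples=1.0,           # each bootstrap sample matches the training set size
                        bootstrap=True,            # sample with replacement
                        random_state=1)
bag.fit(X_train, y_train)
print("test accuracy:", bag.score(X_test, y_test))
```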

"lowess (locally weighted scatterplot smoothing) Local scatter-weighted smoothing"

LOESS and LOWESS thus build on "classical" methods, such as linear and nonlinear least squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of nonlinear regression. It does this by fitting simple models to localized subsets of the data to build up, point by point, a function that describes the deterministic part of the variation in the data. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.

"Using local data to fit local points by point--without global function fitting model--local problem solving"

http://www.richardafolabi.com/blog/Non-technical-introduction-to-random-forest-and-gradient-boosting-in-machine-learning.html

"A collective wisdom of many is likely more accurate than any one. Wisdom of the Crowd–aristotle, 300bc-"

Bagging

Gradient boosting

    • Ensemble models are great for producing robust, highly optimized, and improved models.
    • Random Forest and Gradient Boosting are ensemble-based algorithms.
    • Random Forest uses the bagging technique, while Gradient Boosting uses the boosting technique (see the sketch after this list).
    • Bagging draws multiple random samples of the data for modeling, while boosting refines the model iteratively.
    • Ensemble models are not easy to interpret and often behave like a black box.
    • Multiple algorithms should be used sparingly so that the prediction system remains reasonably tractable.
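A minimal sketch contrasting the two ensemble families on the same data (scikit-learn; the dataset and hyperparameters are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Random Forest: bagging of randomized trees, trained independently of each other.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
# Gradient Boosting: trees added iteratively, each correcting the current ensemble's errors.
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, random_state=42)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    print(name, "mean CV accuracy:", round(cross_val_score(model, X, y, cv=5).mean(), 3))
```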

