Aggregation (1): Blending, Bagging, Random Forest

Suppose we have many machine learning algorithms (any of the ones we have learned so far). Can we use them together to improve performance? As the proverb goes: three cobblers with their wits combined equal Zhuge Liang; that is, several mediocre learners may together match a strong one.

There are several ways to do aggregation: select the best hypothesis, mix them uniformly, mix them with non-uniform weights, or combine them conditionally.

Given several machine learning algorithms with poor performance (weak learners), how can we aggregate them into an algorithm with better performance?

We can see that aggregation sometimes acts like a feature transform (adding model power) and sometimes like regularization (adding stability).

Blending: Uniform Blending, Linear Blending, Any Blending

We can see that the expected performance of a learning algorithm splits into two parts: the performance of the consensus (bias) and the expected deviation from the consensus (variance). Uniform blending improves performance by reducing the variance, thereby obtaining a more stable algorithm.
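As a quick sketch of that decomposition for squared error (notation beyond the text is assumed: f is the target function, g_t the individual hypotheses, and \bar{g} their uniform average, i.e. the consensus):

\operatorname{avg}_t\,(g_t - f)^2 \;=\; \underbrace{\operatorname{avg}_t\,(g_t - \bar{g})^2}_{\text{variance}} \;+\; \underbrace{(\bar{g} - f)^2}_{\text{bias}}, \qquad \bar{g} = \frac{1}{T}\sum_{t=1}^{T} g_t

The variance term is exactly what uniform blending averages away, so the consensus G = \bar{g} does at least as well as the average single hypothesis.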

In linear blending the weights α_t are nominally constrained to be non-negative, but the constraint can be removed in practice: a negative α_t simply means trusting the reversed hypothesis −g_t.
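A minimal sketch of linear blending under assumed interfaces (pre-trained hypotheses with a scikit-learn-style predict, a held-out blending set, squared error), which makes it just linear regression on the hypotheses' outputs:

import numpy as np

def linear_blend(models, X_val, y_val):
    # Treat each hypothesis g_t as a feature transform:
    # z_n = (g_1(x_n), ..., g_T(x_n)).
    Z = np.column_stack([g.predict(X_val) for g in models])
    # Unconstrained least squares for alpha; a negative alpha_t just
    # means "trust the reversed hypothesis -g_t".
    alpha, *_ = np.linalg.lstsq(Z, y_val, rcond=None)
    return alpha

def blend_predict(models, alpha, X):
    Z = np.column_stack([g.predict(X) for g in models])
    return Z @ alpha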

Bagging

We can see that aggregation works because of the diversity of the underlying hypotheses. So how do we generate enough diverse hypotheses? There are several approaches; here we focus on one: diversity through data randomness.

We already imagined this situation for uniform blending, but only in an idealized setting: 1) T cannot actually go to infinity, and 2) we do not have infinitely many fresh datasets D_t, only one finite dataset D. Bootstrapping addresses both: re-sample N examples from D uniformly with replacement to form each D~_t, train g_t on D~_t, and combine the g_t with uniform voting. This is bagging (bootstrap aggregation); a sketch follows.
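Here is a minimal bagging sketch under assumed interfaces: make_learner() returns a fresh base learner with scikit-learn-style fit/predict, and labels are in {-1, +1} so the uniform vote is just a sign:

import numpy as np

def bagging(make_learner, X, y, T=25, seed=0):
    rng = np.random.default_rng(seed)
    N = len(X)
    models = []
    for _ in range(T):
        # Bootstrap: N draws from D uniformly with replacement.
        idx = rng.integers(0, N, size=N)
        models.append(make_learner().fit(X[idx], y[idx]))
    return models

def uniform_vote(models, X):
    votes = np.array([g.predict(X) for g in models])  # shape (T, N)
    return np.sign(votes.sum(axis=0))                 # G(x): majority vote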

Random Forest

What is a random forest? It is a special case of bagging: the one where each base hypothesis g_t is a decision tree.

Why does this pairing work? We said earlier that uniform blending improves performance by reducing variance and making the algorithm stable, and bagging is a special form of blending. We also know that decision trees are sensitive to the data: different samples can produce drastically different trees. Bagging is exactly what reduces that variance.

So the random forest can be described as a special case of bagging, or equally as a strategy for improving the performance (stability) of decision trees.

So what are the so-called bootstrap steps? Do we just generate many resampled datasets D~_t? See the sketch below.
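Roughly, yes: draw T bootstrap datasets D~_t, grow one randomized decision tree per D~_t, and vote. A sketch reusing the bagging routine above, with scikit-learn's DecisionTreeClassifier and its max_features option standing in for the per-split feature randomness (the hyperparameters are assumptions, not the course's exact recipe):

from sklearn.tree import DecisionTreeClassifier

def random_forest(X, y, T=300, seed=0):
    # Random forest = bagging + randomized trees: each tree trains on a
    # bootstrap sample and considers only a random subset of features
    # at every split.
    make_tree = lambda: DecisionTreeClassifier(max_features="sqrt")
    return bagging(make_tree, X, y, T=T, seed=seed)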

How many decision trees does it take? The author mentions having used 12,000 trees in one competition.

Out-of-Bag (OOB) Technique

We described the bagging procedure above: each g_t is trained only on its bootstrap sample D~_t.

That is, for any single g_t, nearly one third of the examples never appear in its bootstrap sample and are never used to train it. What a waste! How can we use these out-of-bag (OOB) examples?
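Where does the "one third" come from? Each of the N bootstrap draws misses a fixed example with probability 1 - 1/N, so

\Pr\big[(x_n, y_n) \notin \tilde{D}_t\big] \;=\; \Big(1 - \frac{1}{N}\Big)^{N} \;\longrightarrow\; \frac{1}{e} \approx 0.368 \quad (N \to \infty).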

Recall validation: we normally hold out part of the data just to estimate performance. The OOB examples give us the same thing for free: for each example (x_n, y_n), ask only the trees that never saw it, which yields the self-validation error E_oob(G) without sacrificing any training data.
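A sketch of OOB self-validation under the same assumptions as the bagging code (labels in {-1, +1}; boot_indices is assumed to hold the bootstrap index array used to train each tree, recorded from the bagging loop):

import numpy as np

def oob_error(models, boot_indices, X, y):
    N = len(X)
    votes = np.zeros(N)   # summed votes from trees that never saw each example
    counts = np.zeros(N)  # number of such trees per example
    for g, idx in zip(models, boot_indices):
        oob = np.setdiff1d(np.arange(N), idx)  # g's out-of-bag examples
        if oob.size == 0:
            continue
        votes[oob] += g.predict(X[oob])
        counts[oob] += 1
    covered = counts > 0                   # has at least one OOB tree
    G_minus = np.sign(votes[covered])      # vote of only the OOB trees
    return np.mean(G_minus != y[covered])  # E_oob(G)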

Feature Selection

Suppose each sample has many features, some redundant and some irrelevant to the problem at hand. How do we select the features we actually want?
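One standard random-forest answer is the permutation test, sketched below: feature i's importance is the performance drop when its column is randomly shuffled, which destroys its relationship with y while preserving its marginal distribution. The model is assumed to expose a scikit-learn-style score; the original technique uses OOB examples, and a held-out validation set stands in here:

import numpy as np

def permutation_importance(model, X_val, y_val, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = model.score(X_val, y_val)
    importance = np.zeros(X_val.shape[1])
    for i in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X_val.copy()
            Xp[:, i] = rng.permutation(Xp[:, i])  # shuffle feature i only
            drops.append(baseline - model.score(Xp, y_val))
        importance[i] = np.mean(drops)  # larger drop => more important
    return importance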
