Random forest (initial)

In machine learning, a random forest is made up of many decision trees. Because each tree is built with a randomized procedure, these trees are also called random decision trees, and there is no dependence between the trees in the forest. When a test sample enters the random forest, every decision tree classifies it, and the class that receives the most votes across all trees becomes the final result. A random forest is therefore a classifier containing multiple decision trees, whose output category is the mode of the categories output by the individual trees. A random forest can handle both discrete attributes, as in the ID3 algorithm, and continuous attributes, as in the C4.5 algorithm. In addition, random forests can also be used for unsupervised learning tasks such as clustering and anomaly detection.
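As a minimal sketch of this majority-vote idea, the snippet below (assuming scikit-learn and a synthetic toy dataset) collects each tree's prediction and takes the modal class. Note that scikit-learn's RandomForestClassifier actually aggregates by averaging class probabilities, so its output can occasionally differ from a strict hard vote.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data, purely illustrative.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Collect every tree's class prediction for the first five samples.
per_tree_votes = np.array([tree.predict(X[:5]) for tree in forest.estimators_]).astype(int)

# Hard majority vote: the class chosen by the most trees for each sample.
majority_vote = np.array([np.bincount(votes).argmax() for votes in per_tree_votes.T])
print("hard majority vote:", majority_vote)

# The library's own aggregation (averaged probabilities) for comparison.
print("forest.predict():  ", forest.predict(X[:5]))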

The decision tree algorithm has many good properties, such as low training time complexity, a fast prediction process, and an easily displayed model (a decision tree can readily be drawn as a diagram). At the same time, a single decision tree has drawbacks, most notably over-fitting; methods such as pruning can reduce this problem, but they are still not enough on their own.

A decision tree is essentially a way of dividing the space with hyperplanes: each split cuts the current space into two parts. Consider, for example, the following decision tree:

It divides the space into the following regions:

In this way, each leaf node corresponds to a non-intersecting region of the space. When making a decision, the input sample falls into exactly one of the N regions according to the values of its features in each dimension (assuming the tree has N leaf nodes).
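A minimal sketch of this region assignment, assuming scikit-learn and illustrative 2-D data: the tree's apply() method reports which leaf (region) each sample lands in, and the number of distinct leaves equals the number of disjoint regions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative 2-D points with a simple ground-truth split.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 2))
y = (X[:, 0] + X[:, 1] > 10).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# apply() returns, for each sample, the index of the leaf it falls into.
leaf_ids = tree.apply(X)
print("number of leaves (disjoint regions):", np.unique(leaf_ids).size)
print("first few samples -> leaf id:", leaf_ids[:5])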

Data Mining

Data Mining is all about automating the process of searching for patterns in the data.

Searching for high info gains

Given something you are trying to predict (e.g. wealth), it is easy to ask the computer to find which attribute has the highest information gain for it.
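A minimal sketch of that search, using a small hypothetical dataset with a "wealth" target (the attribute names and records are made up for illustration): compute each attribute's information gain and pick the largest.

import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, attribute, target):
    # Entropy of the target minus the weighted entropy after splitting on the attribute.
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

data = [
    {"education": "college", "employed": "yes", "wealth": "rich"},
    {"education": "college", "employed": "no",  "wealth": "rich"},
    {"education": "none",    "employed": "yes", "wealth": "poor"},
    {"education": "none",    "employed": "no",  "wealth": "poor"},
]

for attr in ("education", "employed"):
    print(attr, information_gain(data, attr, "wealth"))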

Base Cases
• Base case one: if all records in the current data subset have the same output, then don't recurse.
• Base case two: if all records have exactly the same set of input attributes, then don't recurse (both cases are sketched below).
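A minimal sketch of how these base cases sit inside a recursive tree builder. The split heuristic below simply takes the first attribute that still varies; a real learner would choose the attribute with the highest information gain, as above.

from collections import Counter

def build_tree(rows, attributes, target):
    outputs = [r[target] for r in rows]

    # Base case one: all records in the current subset have the same output.
    if len(set(outputs)) == 1:
        return {"leaf": outputs[0]}

    # Base case two: all records have exactly the same input attributes,
    # so no split can separate them; return a leaf with the majority output.
    varying = [a for a in attributes if len({r[a] for r in rows}) > 1]
    if not varying:
        return {"leaf": Counter(outputs).most_common(1)[0][0]}

    # Recursive case: split on an attribute and recurse on each branch.
    attr = varying[0]
    branches = {}
    for value in {r[attr] for r in rows}:
        subset = [r for r in rows if r[attr] == value]
        branches[value] = build_tree(subset, attributes, target)
    return {"attribute": attr, "branches": branches}

Called on the small dataset above, build_tree(data, ("education", "employed"), "wealth") returns a nested dict of splits and leaves.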

Training set error
• For each record, follow the decision tree to see what it would predict. For how many records does the decision tree's prediction disagree with the true value recorded in the database?
• This quantity is called the training set error. The smaller, the better; a sketch of this count follows.
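A minimal sketch of that count, reusing the dict-based trees from the builder above: walk each training record down the tree and report the fraction of disagreements.

def predict_one(tree, record):
    # Follow branches until a leaf is reached.
    while "leaf" not in tree:
        tree = tree["branches"][record[tree["attribute"]]]
    return tree["leaf"]

def training_set_error(tree, rows, target):
    disagreements = sum(predict_one(tree, r) != r[target] for r in rows)
    return disagreements / len(rows)  # smaller is better

For a tree grown until every leaf is pure, this number is typically zero, which is exactly why training set error can be misleading on its own.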

Test Set Error
• Suppose we are forward thinking.
• We hide some data away when we learn the decision tree.
• But once learned, we see how well the tree predicts that data.
• This is a good simulation of what happens when we try to predict future data.
• And it is called test set error.
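A minimal sketch of that procedure, assuming scikit-learn and synthetic data: hold 30% of the records out, learn the tree on the rest, and measure the disagreement rate on the held-out part.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative data; 30% is hidden from training and used only for evaluation.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training set error:", np.mean(tree.predict(X_train) != y_train))
print("test set error:    ", np.mean(tree.predict(X_test) != y_test))  # usually higher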

Decision Tree Pruning
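Pruning removes parts of a tree that mostly fit noise in the training data, trading a little training accuracy for better generalization. As a minimal sketch of one common approach (an illustrative choice, not necessarily the specific method intended here), the snippet below uses scikit-learn's cost-complexity pruning via the ccp_alpha parameter and compares the size and test error of an unpruned and a pruned tree.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# ccp_alpha > 0 penalizes tree size; the value 0.01 is an arbitrary illustration.
unpruned = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_train, y_train)

for name, model in (("unpruned", unpruned), ("pruned", pruned)):
    test_error = np.mean(model.predict(X_test) != y_test)
    print(name, "leaves:", model.get_n_leaves(), "test error:", round(test_error, 3))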

