In machine learning, a random forest is an ensemble of many decision trees; because each tree is built with a random method, they are also called random decision trees. The trees in a random forest are independent of one another. When a test sample enters the random forest, every decision tree classifies it, and the class chosen by the most trees is the final result. A random forest is therefore a classifier made up of multiple decision trees, whose output category is the mode of the classes output by the individual trees. A random forest can handle both discrete attributes (as in the ID3 algorithm) and continuous attributes (as in the C4.5 algorithm). In addition, random forests can also be used for unsupervised tasks such as clustering and anomaly detection.
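The majority-vote rule described above can be sketched in a few lines. This is a minimal illustration, not a full forest implementation: it assumes the per-tree class predictions for one sample are already available and simply takes their mode.

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Combine per-tree class predictions by majority vote (the mode)."""
    votes = Counter(tree_predictions)
    return votes.most_common(1)[0][0]

# Hypothetical predictions from five independent trees for one test sample:
preds = ["cat", "dog", "cat", "cat", "dog"]
print(forest_predict(preds))  # "cat" wins with 3 of 5 votes
```

In a real library such as scikit-learn, this voting is handled internally by `RandomForestClassifier`; the sketch only shows the aggregation step.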
The decision tree algorithm has many good properties, such as low training time complexity, a fast prediction process, and an easily displayed model (the tree can readily be drawn as an image). At the same time, a single decision tree has some drawbacks, chief among them over-fitting; techniques such as pruning can reduce the problem, but they are not sufficient on their own.
A decision tree is in effect a way of dividing the space with hyperplanes: each split cuts the current region into two parts. For example, a decision tree of the following form:
Divides the space as follows:
In this way, each leaf node corresponds to a non-intersecting region of the space. When making a decision, the sample falls into exactly one of the n regions (assuming the tree has n leaf nodes) according to the value of each dimension of its input features.
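The routing of a sample to one of the n disjoint regions can be sketched with a small hand-built tree. The split thresholds (x < 3, then y < 2) and region names are hypothetical, chosen only to show that every point lands in exactly one leaf region.

```python
def leaf_region(x, y):
    """Route a 2-D point to a leaf of a small hand-built decision tree.
    First split on x < 3, then on y < 2 on each side,
    giving 4 non-intersecting regions R1..R4 covering the plane."""
    if x < 3:
        return "R1" if y < 2 else "R2"
    else:
        return "R3" if y < 2 else "R4"

# Every input falls in exactly one region:
print(leaf_region(1, 1), leaf_region(1, 5), leaf_region(4, 1), leaf_region(4, 5))
```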
Data Mining
Data Mining is all about automating the process of searching for patterns in the data.
Searching for high info gains
Given something you are trying to predict (e.g. wealth), it is easy to ask the computer to find which attribute has the highest information gain for it.
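Information gain can be computed directly from entropy. The sketch below, with hypothetical wealth/degree/haircut data, shows that a perfectly predictive attribute gains one full bit while an uninformative one gains nothing.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(labels, attribute_values):
    """Entropy of labels minus the weighted entropy after splitting
    the records on each value of the attribute."""
    gain = entropy(labels)
    n = len(labels)
    for v in set(attribute_values):
        subset = [y for y, a in zip(labels, attribute_values) if a == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Hypothetical records: predict wealth from two candidate attributes.
wealth  = ["rich", "poor", "rich", "poor"]
degree  = ["yes", "no", "yes", "no"]   # perfectly predictive -> gain = 1 bit
haircut = ["s", "s", "l", "l"]         # uninformative -> gain = 0 bits
print(info_gain(wealth, degree), info_gain(wealth, haircut))
```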
Base Cases
• Base case one: if all records in the current data subset have the same output, then don't recurse.
• Base case two: if all records have exactly the same set of input attributes, then don't recurse.
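The two base cases above translate into a simple stopping test inside a recursive tree builder. A minimal sketch, assuming each record is an (inputs, output) pair:

```python
def should_stop(records):
    """records: list of (inputs_tuple, output) pairs.
    Return True when either base case holds."""
    outputs = {out for _, out in records}
    if len(outputs) == 1:          # base case one: all outputs identical
        return True
    inputs = {inp for inp, _ in records}
    if len(inputs) == 1:           # base case two: all input attributes identical
        return True
    return False

print(should_stop([((1, 0), "yes"), ((0, 1), "yes")]))  # True: same output
print(should_stop([((1, 0), "yes"), ((1, 0), "no")]))   # True: same inputs
print(should_stop([((1, 0), "yes"), ((0, 1), "no")]))   # False: keep recursing
```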
Training set error
• For each record, follow the decision tree to see what it would predict. For what number of records does the decision tree's prediction disagree with the true value in the database?
• This quantity is called the training set error.
The smaller the better.
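The quantity defined above is just the fraction of records whose predicted value disagrees with the true one. A minimal sketch, assuming the tree's predictions have already been collected:

```python
def training_set_error(predictions, truths):
    """Fraction of records where the tree's prediction disagrees
    with the true value stored in the database."""
    disagreements = sum(p != t for p, t in zip(predictions, truths))
    return disagreements / len(truths)

# One disagreement out of four records:
print(training_set_error(["a", "b", "a", "a"], ["a", "b", "b", "a"]))  # 0.25
```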
Test Set Error
• Suppose we are forward thinking.
• We hide some data away when we learn the decision tree.
• But once learned, we see how well the tree predicts that data.
• This is a good simulation of what happens when we try to predict future data.
• And it is called test set error.
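The hold-out procedure in the bullets above can be sketched as follows. The data, the 75/25 split, and the trivial majority-class "model" standing in for a learned tree are all hypothetical; the point is only that the error is measured on records the model never saw.

```python
from collections import Counter

def majority_class(labels):
    """A trivial stand-in for a learned tree: always predict the mode."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical records; hide the last 25% away while learning.
data = [("x1", "yes"), ("x2", "yes"), ("x3", "no"), ("x4", "yes"),
        ("x5", "no"), ("x6", "yes"), ("x7", "yes"), ("x8", "no")]
split = int(len(data) * 0.75)
train, test = data[:split], data[split:]

model = majority_class([y for _, y in train])   # "learned" on training data only
# Test set error: fraction of hidden records the model gets wrong.
test_error = sum(model != y for _, y in test) / len(test)
print(model, test_error)
```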
Decision Tree Pruning
Random Forest (Initial)