Introduction

I had not mentioned random forests in machine learning before, so I went to the home page of Breiman at UC Berkeley to look at the relevant material; Breiman appears to be the one who proposed the random forest algorithm. The URL is as follows:

http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm

Introduction to the random forest algorithm
A random forest is simply a large number of decision trees put together to form a forest. The key is how each tree in the forest is built. Random forests use the bootstrap method, which, put simply, means sampling with replacement.

There is a theoretical basis here: with sampling with replacement, roughly one third of the samples will never be drawn. Let me briefly explain why.

Why one third of the samples are not drawn
Suppose the original sample set D contains n samples. In a single draw, the probability that a particular sample is not chosen is 1 - 1/n, so the probability that it is never chosen in n draws with replacement is (1 - 1/n)^n.
When n is large enough, (1 - 1/n)^n converges to 1/e ≈ 0.368, which means that close to 37% of the samples in the original sample set D do not appear in the bootstrap sample. These samples are called out-of-bag (OOB) data. Using this data to estimate the performance of the model gives the OOB estimate.
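As a quick sanity check, here is a minimal Python sketch (my own illustration, not from Breiman's page) that draws one bootstrap sample and measures the fraction of points left out; the sample size is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000                              # size of the original sample set D (arbitrary)

# One bootstrap sample: n indices drawn with replacement.
bootstrap_idx = rng.integers(0, n, size=n)

# Samples whose index never appears are the out-of-bag (OOB) data.
oob_mask = np.ones(n, dtype=bool)
oob_mask[bootstrap_idx] = False

print("OOB fraction:", oob_mask.mean())  # comes out close to 1/e ≈ 0.368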
Building a decision tree
Besides randomly drawing samples, we also need to randomly draw the features used for each decision tree. For example, out of all the feature dimensions of the samples, we randomly draw ten dimensions to build one decision tree.

If we want to build a random forest of three decision trees, we have to draw features at random three times, each time drawing ten feature dimensions to build one decision tree.
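To make the feature-sampling step concrete, here is a minimal sketch (my own illustration, not from the original article); the total of 100 feature dimensions is an assumed number, while ten dimensions per tree and three trees follow the text above.

import numpy as np

rng = np.random.default_rng(0)
n_features = 100   # assumed total number of feature dimensions (for illustration only)
n_sub = 10         # ten feature dimensions per tree, as in the text
n_trees = 3        # three trees, so three independent draws

# One independent random subset of feature indices per decision tree.
feature_subsets = [rng.choice(n_features, size=n_sub, replace=False)
                   for _ in range(n_trees)]

for i, subset in enumerate(feature_subsets):
    print("tree", i, "uses features", sorted(subset.tolist()))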
The samples used to build each decision tree also need to be drawn at random, using the bootstrap method mentioned earlier. Here is a brief description of how the sample set for building one decision tree is generated.

The bootstrap sampling process
Randomly draw one sample, put it back, then randomly draw one sample again, and repeat this n times to obtain a data set of n samples. Using this data set of n samples and the ten feature dimensions selected at random earlier, we traverse those ten dimensions to split the data set and obtain one decision tree.
To construct more trees, we repeat this process once per tree: each time we randomly draw ten feature dimensions and randomly draw n samples with replacement (a rough sketch follows).
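Putting the two kinds of random drawing together, a rough sketch of building the trees might look like the following. This is my own illustration, not the author's code: it uses sklearn's DecisionTreeClassifier as the base tree, the iris data set only has four feature dimensions so two are drawn per tree instead of ten, and three trees are built.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
n_samples, n_features = X.shape
rng = np.random.default_rng(0)
n_trees, n_sub = 3, 2                    # illustrative values only

forest = []
for _ in range(n_trees):
    # Bootstrap: draw n_samples indices with replacement.
    sample_idx = rng.integers(0, n_samples, size=n_samples)
    # Feature sampling: draw a random subset of feature dimensions.
    feat_idx = rng.choice(n_features, size=n_sub, replace=False)
    tree = DecisionTreeClassifier().fit(X[sample_idx][:, feat_idx], y[sample_idx])
    forest.append((tree, feat_idx))      # remember which features this tree saw

print("built", len(forest), "trees on feature subsets",
      [f.tolist() for _, f in forest])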
Random forest classification
OOB data are used to test the classification performance of the random forest: the bootstrap method leaves about one third of the data never sampled, and this part of the data is called the OOB data. These samples are fed into the forest, each tree produces a classification result for the samples it did not train on, and the final result is then decided by a vote.
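A minimal sketch of this OOB vote, assuming sklearn's DecisionTreeClassifier for the individual trees and the iris data set (feature sampling is left out here to keep the vote itself easy to follow):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
n, n_classes = len(X), len(np.unique(y))
rng = np.random.default_rng(0)
n_trees = 25

votes = np.zeros((n, n_classes))                # OOB vote counts per sample and class
for _ in range(n_trees):
    idx = rng.integers(0, n, size=n)            # bootstrap sample (with replacement)
    oob_rows = np.setdiff1d(np.arange(n), idx)  # samples this tree never saw
    tree = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
    preds = tree.predict(X[oob_rows])
    votes[oob_rows, preds] += 1                 # each tree votes only on its OOB samples

has_vote = votes.sum(axis=1) > 0                # samples that were OOB at least once
voted_class = votes.argmax(axis=1)
oob_error = np.mean(voted_class[has_vote] != y[has_vote])
print("OOB error estimate:", oob_error)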
Why use the OOB method instead of cross-validation
One question is how this OOB approach differs from the random splitting used in cross-validation. In 10-fold cross-validation, for example, the data set is divided evenly into ten parts; nine of them are used for training and one for testing, this is repeated ten times, and the results are averaged. So what is the difference between OOB and cross-validation?
According to the author's argument, a very important difference is the amount of computation. Using cross-validation (CV) to estimate the generalization error of the combined classifier requires a large amount of computation, which reduces the efficiency of the algorithm. Using OOB data to estimate the generalization error instead, the OOB misclassification rate can be computed while each decision tree is being constructed, and the final estimate only requires a small amount of extra computation. Compared with cross-validation, the OOB estimate is efficient and its results approximate those of cross-validation.
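To illustrate the difference in cost, here is a small sketch using scikit-learn (my own example, with an arbitrary data set and parameter values): the OOB estimate falls out of fitting one forest, whereas 10-fold cross-validation has to refit the forest once per fold.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# OOB estimate: a by-product of fitting a single forest.
forest = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
forest.fit(X, y)
print("OOB accuracy:", forest.oob_score_)

# 10-fold cross-validation: the forest is refit ten times.
cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=10)
print("10-fold CV accuracy:", cv_scores.mean())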