I. Out of bag estimate (OOB)
1. Number of OOB samples
RF is a method that extends bagging. With bootstrap sampling with replacement, the probability that a given sample is never drawn can be derived (the limit 1/e follows from L'Hôpital's rule in calculus):

$$\lim_{N \to \infty}\left(1 - \frac{1}{N}\right)^{N} = \frac{1}{e} \approx 36.8\%$$
In RF, N samples are drawn each time to train each decision tree g_t. For a given tree g_t, roughly 1/e (about 36.8%) of the samples in the original dataset do not participate in its training; this out-of-bag portion of the data can therefore be used to validate the tree g_t. (In the original figure, red asterisks mark samples not used in training decision tree g_t.)
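The 1/e figure is easy to verify empirically. The sketch below (function name and parameters are illustrative, not from the original) draws N indices with replacement and measures what fraction of the dataset was never selected:

```python
import random

def oob_fraction(n, trials=200, seed=0):
    """Estimate the fraction of samples left out of a bootstrap sample of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        drawn = {rng.randrange(n) for _ in range(n)}  # draw n indices with replacement
        total += (n - len(drawn)) / n                 # fraction never drawn = OOB fraction
    return total / trials

# For large n the estimate approaches 1/e ≈ 0.368
print(oob_fraction(1000))
```

Running this for growing n shows the OOB fraction settling near 36.8%, matching the limit above.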
2. How to validate with OOB samples
RF combines multiple base learners to obtain its overall performance; even when each base learner performs only modestly, the combined ensemble can still perform well ("three cobblers with their wits combined equal one Zhuge Liang"). Therefore, for RF the overall model should be validated, rather than each base learner in turn. So how can we validate a trained RF?
Specific Practices:
(1) For each sample (x_i, y_i) in the dataset, filter out the decision trees that were trained on it, leaving the sub-model set G_i^- of trees whose bootstrap samples did not contain (x_i, y_i);
(2) Then evaluate the sub-model set G_i^- on (x_i, y_i) to compute the error, obtaining the validation result for sample (x_i, y_i) (this step is similar to leave-one-out validation);
(3) Repeat step (2), computing the validation result for each remaining sample in turn;
(4) Sum and average the validation results over all samples as the final validation result of the RF model.
The specific expression is as follows:

$$E_{\text{oob}}(G) = \frac{1}{N}\sum_{n=1}^{N}\operatorname{err}\big(y_n,\; G_n^-(x_n)\big)$$

where G_n^- denotes the sub-forest of trees whose bootstrap samples did not contain (x_n, y_n).
Feature: no extra validation set is required; RF can validate itself.
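The four steps above can be sketched end to end. In this toy version (all names are illustrative), each "tree" is replaced by a majority-class predictor trained on a bootstrap sample, which is enough to show the OOB bookkeeping: for each sample, only the trees that never saw it are allowed to vote.

```python
import random
from collections import Counter

def oob_error(X, y, n_trees=25, seed=0):
    """OOB validation sketch: each 'tree' is a majority-class predictor
    trained on a bootstrap sample (a stand-in for a real decision tree)."""
    rng = random.Random(seed)
    n = len(X)
    trees = []  # list of (predicted_class, set of in-bag indices)
    for _ in range(n_trees):
        in_bag = [rng.randrange(n) for _ in range(n)]  # bootstrap with replacement
        majority = Counter(y[i] for i in in_bag).most_common(1)[0][0]
        trees.append((majority, set(in_bag)))
    errors, counted = 0, 0
    for i in range(n):
        # G_i^-: only trees whose bootstrap sample did NOT contain sample i
        votes = [pred for pred, bag in trees if i not in bag]
        if not votes:
            continue  # rare: sample appeared in every tree's bootstrap sample
        pred = Counter(votes).most_common(1)[0][0]
        errors += int(pred != y[i])
        counted += 1
    return errors / counted  # average validation error over OOB-covered samples

# Toy dataset: X is unused by the majority-class stump, kept only for shape
X = list(range(12))
y = [0] * 8 + [1] * 4
print(oob_error(X, y))
```

With a real decision tree in place of the majority stump, the same loop yields the self-validation described above, with no held-out validation set needed.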
3. How does RF implement feature selection?