The idea of boosting is a form of ensemble learning: combine many weak classifiers to form one strong classifier.
First, train a weak classifier on the original training samples; from its predictions we can compute its error rate ε. The weight of the weak classifier is then calculated as follows (the standard AdaBoost choice):

    α = (1/2) · ln((1 − ε) / ε)
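The classifier weight in standard AdaBoost is α = (1/2)·ln((1 − ε)/ε), which grows as the error rate shrinks and is zero for a classifier no better than chance. A minimal sketch:

```python
import math

def classifier_weight(error):
    # alpha = 0.5 * ln((1 - eps) / eps)
    # accurate classifiers (small eps) get a large positive weight;
    # a coin-flip classifier (eps = 0.5) gets weight zero
    return 0.5 * math.log((1 - error) / error)
```

For example, `classifier_weight(0.5)` is 0, while `classifier_weight(0.1)` is larger than `classifier_weight(0.3)`.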
Then increase the weights of the misclassified samples so that the next classifier focuses on them. With labels and predictions in {−1, +1}, each sample weight w is adjusted as follows:
If the sample was classified correctly: w ← w · e^(−α) / Z
If the sample was misclassified: w ← w · e^(α) / Z
where Z is a normalization factor that makes the weights sum to 1.
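The two update cases above differ only in the sign of the exponent; a minimal sketch of the per-sample rule (standard AdaBoost form, before normalization):

```python
import math

def update_weight(w, alpha, correct):
    # correctly classified samples are down-weighted by e^(-alpha),
    # misclassified samples are up-weighted by e^(alpha);
    # the weights are renormalized afterwards so they sum to 1
    return w * math.exp(-alpha if correct else alpha)
```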
The reweighted samples are then fed back into training, and the process repeats, producing many weak classifiers h_t, each with its own weight α_t. The final strong classifier is the weighted vote of all of them: H(x) = sign(Σ_t α_t · h_t(x)).
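The whole loop above can be sketched end to end. This is a toy AdaBoost with decision stumps as the weak learner (the stump learner is my own minimal choice here, not prescribed by the text); labels are assumed to be in {−1, +1}:

```python
import numpy as np

def best_stump(X, y, w):
    # weak learner: exhaustively pick the (feature, threshold, sign)
    # stump with the lowest weighted error
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best

def stump_predict(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)

def train_adaboost(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # sample weights, uniform at first
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = best_stump(X, y, w)            # fit weak learner on weighted data
        pred = stump_predict(stump, X)
        eps = w[pred != y].sum()               # weighted error rate
        eps = min(max(eps, 1e-10), 1 - 1e-10)  # guard against log(0)
        alpha = 0.5 * np.log((1 - eps) / eps)  # classifier weight
        w *= np.exp(-alpha * y * pred)         # up-weight mistakes, down-weight the rest
        w /= w.sum()                           # renormalize (the Z factor)
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    # strong classifier: sign of the alpha-weighted vote
    score = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(score)
```

Usage: on a small separable set such as `X = [[0],[1],[2],[3]]` with `y = [1,1,-1,-1]`, a few rounds of `train_adaboost` followed by `predict` recover the labels.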
Note that there are two kinds of weights in the boosting algorithm: the weights of the classifiers and the weights of the samples.
Advantages of boosting: strong predictive performance, good resistance to overfitting in practice, and the ability to combine the strengths of multiple weak classifiers.
Cons: sensitive to outliers and label noise, since misclassified samples keep receiving ever-larger weights.
Review machine learning algorithm: boosting