Image data: convolution is king; there are a few commonly used frameworks that people typically take off the shelf and adapt.
Non-image feature data: handled by category:
Boosting-series algorithms: XGBoost framework implementation
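A minimal sketch of how a boosting model is typically trained with the xgboost Python package is shown below. The placeholder data, feature dimensions, and parameter values are illustrative assumptions, not a tuned recipe from the original text.

```python
# Minimal XGBoost training sketch (placeholder data and illustrative parameters).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Assumed placeholder data: X is an (n_samples, n_features) matrix, y is binary labels.
X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

params = {
    "objective": "binary:logistic",  # binary classification
    "eta": 0.1,                      # learning rate
    "max_depth": 6,
    "eval_metric": "logloss",
}
dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_valid, label=y_valid)

# Train with early stopping on the validation set.
booster = xgb.train(
    params,
    dtrain,
    num_boost_round=500,
    evals=[(dvalid, "valid")],
    early_stopping_rounds=20,
)
pred = booster.predict(dvalid)  # predicted probabilities on the validation set
```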
The AdaBoost algorithm trains the same base classifier (a weak classifier) on different training sets and then combines the classifiers obtained on those sets into a stronger final classifier (the strong classifier). Theory shows that, as long as each weak classifier performs better than random guessing, the error rate of the strong classifier tends to zero as the number of weak classifiers tends to infinity. The different training sets in AdaBoost are produced by adjusting the weight assigned to each sample. Initially, every sample has the same weight, and a base classifier H1(x) is trained under this sample distribution. The weights of the samples that H1(x) misclassifies are increased, and the weights of correctly classified samples are reduced, so that the misclassified samples are emphasized and a new sample distribution is obtained. At the same time, H1(x) is given a weight according to how many samples it misclassified, indicating the importance of that base classifier: the fewer the errors, the larger the weight. Under the new sample distribution, a base classifier is trained again, yielding H2(x) and its weight. Repeating this for T rounds gives T base classifiers and their T corresponding weights. Finally, the T base classifiers are summed according to these weights, producing the desired strong classifier.
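The reweighting loop described above can be sketched as follows. This is a minimal illustration of discrete AdaBoost, assuming labels in {-1, +1}; the decision-stump base learner and the number of rounds T are illustrative choices, not prescribed by the original text.

```python
# Sketch of the AdaBoost reweighting loop (discrete AdaBoost, labels in {-1, +1}).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                        # initial distribution: equal weights
    learners, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)           # train H_t under the current distribution
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)  # weighted error of H_t
        alpha = 0.5 * np.log((1 - err) / err)      # classifier weight: fewer errors -> larger alpha
        w *= np.exp(-alpha * y * pred)             # raise weights of misclassified samples
        w /= w.sum()                               # renormalize to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    # Final strong classifier: sign of the alpha-weighted sum of the weak classifiers.
    score = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(score)
```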
XGBoost, ExtraTrees, GradientBoosting, and RandomForest classifiers
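A rough sketch of how these four classifiers might be trained side by side and blended by averaging their predicted probabilities, a common baseline in Kaggle-style pipelines. The hyperparameter values and the simple-average blend are assumptions for illustration, not the author's stated setup.

```python
# Illustrative sketch: train the four classifiers named above and average
# their predicted probabilities (a simple blending baseline, not a tuned pipeline).
import numpy as np
from sklearn.ensemble import (
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from xgboost import XGBClassifier

models = {
    "xgb": XGBClassifier(n_estimators=300, learning_rate=0.1, max_depth=5),
    "extratrees": ExtraTreesClassifier(n_estimators=300),
    "gradientboost": GradientBoostingClassifier(n_estimators=300),
    "randomforest": RandomForestClassifier(n_estimators=300),
}

def fit_and_blend(models, X_train, y_train, X_test):
    """Fit each model and return the mean of their class-1 probabilities."""
    probs = []
    for name, model in models.items():
        model.fit(X_train, y_train)
        probs.append(model.predict_proba(X_test)[:, 1])
    return np.mean(probs, axis=0)
```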
The typical routine of a Kaggle competition