This article is based mainly on the official OpenCV documentation:
http://docs.opencv.org/modules/ml/doc/boosting.html
Boosting is a supervised machine learning algorithm that solves binary (two-class) classification problems. This article covers the intuition behind the algorithm; it does not include its mathematical derivation.
Object detection uses exactly this kind of classification algorithm: images containing the target form one class, and images without the target form the other. After the classifier has been trained, detection proceeds by scanning an input image with a sliding window and applying the classifier to each window; whenever the classifier returns the target class, a detection is reported at that window.
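The sliding-window scan described above can be sketched in a few lines of pure Python. This is only an illustration: the function names, the toy image, and the brightness-threshold "classifier" are this article's own stand-ins, not OpenCV's API.

```python
def sliding_window_detect(image, classify, win=3, step=1):
    """Scan `image` (a 2-D list of pixel values) with a win x win window
    and report the top-left corners where `classify` returns True."""
    rows, cols = len(image), len(image[0])
    detections = []
    for y in range(0, rows - win + 1, step):
        for x in range(0, cols - win + 1, step):
            window = [row[x:x + win] for row in image[y:y + win]]
            if classify(window):          # classifier says "target here"
                detections.append((x, y))
    return detections

# Toy "classifier": call a window a target if its mean brightness is high.
def bright(window):
    vals = [v for row in window for v in row]
    return sum(vals) / len(vals) > 128

# Toy 5x5 image: a bright 3x3 patch (value 255) on a dark background.
image = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        image[y][x] = 255

hits = sliding_window_detect(image, bright)
```

In a real detector the window would also be scanned at multiple scales, and the toy brightness test would be replaced by the trained boosted classifier.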
The objective of supervised machine learning is to find a function y = h(x) that, for an input x, gives an output y. If y takes a few discrete values, the task is a classification problem; if y takes continuous values, it is a regression problem. The function h(x) is trained from sample data (x_i, y_i), i = 1, 2, ..., N, where each x_i is a k-dimensional feature vector encoding the features. Once the expression for h(x) has been trained, a new x is fed into y = h(x) to predict y. Typically a parametric model is chosen for h(x), and training means determining the parameters from the sample data, so that the model fixed by those parameters characterizes the functional relationship between the input x and the output y well.
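As a minimal concrete case of "choose a model, then determine its parameter from the samples", consider 1-D inputs and a threshold model h(x) = +1 if x >= theta else -1; training picks the theta with the fewest training errors. The function and data below are this article's own toy illustration:

```python
def train_stump(xs, ys):
    """Fit h(x) = +1 if x >= theta else -1 by choosing the threshold
    theta (from the training inputs) that minimizes training errors."""
    best_theta, best_errors = None, float("inf")
    for theta in xs:                       # candidate thresholds
        errors = sum(1 for x, y in zip(xs, ys)
                     if (1 if x >= theta else -1) != y)
        if errors < best_errors:
            best_theta, best_errors = theta, errors
    return best_theta

# Toy training set: negatives below, positives above.
xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
theta = train_stump(xs, ys)
h = lambda x: 1 if x >= theta else -1      # the trained model
```

Here the "model" has a single parameter theta; real models (decision trees, boosted ensembles) have many more, but training plays the same role.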
The idea of boosting provides a powerful solution to the supervised classification problem: it obtains reliable classification results by combining the outputs of many weak classifiers. A weak classifier is one that recognizes only slightly better than random guessing; a strong classifier is one with high accuracy that can be learned in polynomial time.
Boosting has many variants, including Discrete AdaBoost, Real AdaBoost, LogitBoost, and Gentle AdaBoost. A detailed introduction to these algorithms can be found in this paper: http://web.stanford.edu/~hastie/papers/additivelogisticregression/alr.pdf
These four variants are similar. The following takes Discrete AdaBoost as an example and walks through the algorithm flow.
The first step: collect the samples, encode a feature vector for each sample, and label each sample's category.
The second step: initialize every sample with the same weight, 1/N.
The third step: train a weak classifier f_m(x) on the weighted sample set, then compute its weighted training error err_m and the scale factor c_m = log((1 - err_m)/err_m). Next, update the weights so that the misclassified samples get larger weights, and normalize the weights. This process is repeated for m = 1, 2, ..., M.
The fourth step: output the final classifier, the sign of the weighted vote of the weak classifiers, sign(sum over m of c_m * f_m(x)).
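The four steps above can be sketched in pure Python. This is a minimal sketch of standard Discrete AdaBoost (weights of misclassified samples are multiplied by exp(c_m)); the 1-D decision-stump weak learner, the function names, and the toy data are this article's own choices for illustration, not OpenCV's implementation:

```python
import math

def train_weighted_stump(xs, ys, w):
    """Weighted decision stump on 1-D data: h(x) = s if x >= theta else -s.
    Returns (weighted error, theta, s) for the best stump found."""
    best = None
    for theta in xs:                        # candidate thresholds
        for s in (1, -1):                   # stump orientation
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if (s if xi >= theta else -s) != yi)
            if best is None or err < best[0]:
                best = (err, theta, s)
    return best

def adaboost(xs, ys, M):
    n = len(xs)
    w = [1.0 / n] * n                       # step 2: uniform weights 1/N
    ensemble = []
    for _ in range(M):
        err, theta, s = train_weighted_stump(xs, ys, w)  # step 3
        err = max(err, 1e-10)               # guard against log(1/0)
        c = math.log((1 - err) / err)       # scale factor c_m
        pred = [(s if xi >= theta else -s) for xi in xs]
        w = [wi * math.exp(c) if p != yi else wi  # boost misclassified
             for wi, p, yi in zip(w, pred, ys)]
        total = sum(w)
        w = [wi / total for wi in w]        # normalize
        ensemble.append((c, theta, s))
    return ensemble

def predict(ensemble, x):                   # step 4: sign of weighted vote
    score = sum(c * (s if x >= theta else -s) for c, theta, s in ensemble)
    return 1 if score >= 0 else -1

# Toy data: the positive class is an interval, so no single stump fits it,
# but a small boosted ensemble of stumps does.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [-1, -1, 1, 1, -1, -1]
model = adaboost(xs, ys, M=3)
```

The toy data makes the point of boosting visible: each individual stump misclassifies at least two samples, yet the weighted vote of three stumps classifies the whole training set correctly.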
The official OpenCV documentation notes that "Gentle AdaBoost and Real AdaBoost are often the preferable choices," i.e., of these four algorithms, Gentle AdaBoost and Real AdaBoost are often preferable.
OpenCV implements all four of these algorithms, so for applications there is no need to reinvent the wheel; only the interface matters.
As follows....