To detect an object in an image, you need something that separates it from everything else in the scene. In image processing, this kind of thing is called a feature. I think we should first consider what kinds of features there are. Color, gray value, texture, and contour, for example, are all features at the image-processing level; what matters is their mathematical expression as feature vectors. Texture, for instance, can be expressed through a distance measure. Since features are what distinguish an object from everything else, AdaBoost is a good way to organize them.
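To make "mathematical expression as a feature vector" concrete, here is a minimal sketch that turns one such property, the gray value, into a vector a classifier can consume. The function name and the 16-bin choice are my own illustrative assumptions, not something from the original post:

```python
import numpy as np

def gray_histogram_feature(gray_image, bins=16):
    """gray_image: 2-D array of intensities in [0, 255].
    Returns a length-`bins` feature vector."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so images of any size are comparable
```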
AdaBoost is short for "adaptive boosting". It is a classifier, and its idea is this: if something cannot be separated by one feature alone, then several features can be combined to obtain a gradually stronger classifier. AdaBoost is therefore built from a group of weak classifiers. At this point some readers may have a question: we were just talking about features, so how did we get to classifiers? What is the relationship and difference between them? By analogy, the feature is like pork, and the classifier is like a way of cooking pork. With good pork, the cooking method doesn't need to be elaborate; with the same pork, the result depends on whose cooking is better.
We are all familiar with Haar wavelet features; in the analogy above, that feature is the "pork". As the field developed, however, it was found that combining other features of an object yields a better classifier, that is, improving the quality of the pork. The texture and edges mentioned above are also features, and integrating them into the classifier is a good approach. For example, an object's contour can be extracted as a feature, and with it we can obtain good detection results. Here I will only give a brief introduction and cover the details later. What is a weak classifier? I think of it as a simple classification method. "Weak" means its classification ability is not very strong; generally, a linear classifier is enough. So why must the classifier be weak? Couldn't you use an SVM or some other non-linear classifier? You could, and people have done exactly that, but in practical applications we have to consider speed and complexity. If a weak classifier meets the requirements, why bother with more?
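As an illustration of how simple a weak classifier can be, here is a sketch of a decision stump: a classifier that thresholds a single feature dimension. This is one common choice of weak learner in boosted detectors, not necessarily the author's; the names fit_stump and stump_predict are hypothetical:

```python
import numpy as np

def fit_stump(X, y, weights):
    """Pick the (feature, threshold, polarity) that minimizes
    weighted error; labels y are in {-1, +1}."""
    n_samples, n_features = X.shape
    best = {"err": np.inf}
    for f in range(n_features):
        for thresh in np.unique(X[:, f]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, f] - thresh) >= 0, 1, -1)
                err = weights[pred != y].sum()  # weighted misclassification
                if err < best["err"]:
                    best = {"err": err, "feature": f,
                            "threshold": thresh, "polarity": polarity}
    return best

def stump_predict(stump, X):
    f, t, p = stump["feature"], stump["threshold"], stump["polarity"]
    return np.where(p * (X[:, f] - t) >= 0, 1, -1)
```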
Then, let's take a look at how weak classifiers are combined into a strong classifier. Simply put, the strong classifier is a linear combination of T weak classifiers: H(x) = sign(α1·h1(x) + α2·h2(x) + … + αT·hT(x)). I see this process as a search for the combination with the best classification accuracy, and reaching it relies on per-sample weights. Each sample carries a weight indicating how easy it is to classify: the easier a sample is to classify, the lower its weight; the harder, the higher. For example, suppose the object and non-object samples have k feature dimensions, with one weak classifier per dimension. The algorithm runs T rounds; in each round it selects the feature whose classifier achieves the highest weighted classification accuracy, and that accuracy determines the classifier's weight α in the strong classifier. After each round, the sample weights are updated, shifting the next classifier's center of gravity from the easily classified samples to the hard ones. After T rounds we have T weak classifiers, and their weighted combination is the strong classifier.
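Putting the pieces together, here is a hedged sketch of the T-round loop just described, reusing fit_stump and stump_predict from the sketch above. The update rule (multiply each sample weight by exp(-α·y·h(x)) and renormalize) is the standard discrete AdaBoost formulation, which matches the behavior the paragraph describes:

```python
import numpy as np

def adaboost_train(X, y, T):
    """Train T weak classifiers; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                   # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(T):
        stump = fit_stump(X, y, w)
        err = max(stump["err"], 1e-10)        # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err) # lower error -> higher weight
        pred = stump_predict(stump, X)
        # Raise the weights of misclassified samples and lower the rest,
        # shifting focus to the hard samples, then renormalize.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    # Strong classifier: sign of the weighted vote of the weak classifiers.
    votes = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```

On a toy dataset, calling adaboost_train(X, y, T=10) and then adaboost_predict typically separates the classes far better than any single stump, which is exactly the "gradually enhanced classifier" idea above.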
Next, I want to talk about which features to select, and the corresponding weak classifiers, so as to handle rotation, scaling, and viewpoint changes in object detection...
(Reprinted from the Yxgeng blog: http://www.china-vision.net/blog/user2/15769/20091719459.html)