First, it should be noted that the features extracted by OpenCV's Haar training are Haar features (for details, see my other article on Haar features: http://blog.csdn.net/carson2005/article/details/8094699 ), and the classifier is an AdaBoost cascade classifier (if you need to learn about the AdaBoost algorithm, see my other article: http://blog.csdn.net/carson2005/article/details/8130557 ). A so-called cascade classifier chains several simple stage classifiers (each of which can be understood as an ordinary classifier) in sequence; a detection region is accepted as containing the target only if it passes every stage classifier in turn. Otherwise, the current detection region is considered to contain no target.
To train a classifier with the haartraining program provided by OpenCV, follow these steps:
(1) Collect training samples:
The training samples include positive and negative samples. A positive sample, in layman's terms, is an image that contains only the target you want to detect; negative sample images simply must not contain the target. However, note that negative samples should not be chosen at random. For example, if you want to detect cars, the positive samples should contain only cars, while the negative samples obviously should not be pictures of the sky, the ocean, or scenery: the classifier you are training is meant to detect cars, and cars appear on roads. That is, the images the classifier will ultimately be run on contain roads, traffic signs, buildings, billboards, cars, motorcycles, tricycles, pedestrians, bicycles, and so on. Accordingly, the negative samples here should include motorcycles, tricycles, bicycles, pedestrians, roads, bushes, flowers and plants, traffic signs, billboards, and the like.
In addition, AdaBoost is a classic machine-learning algorithm, and machine-learning algorithms assume that the test samples and the training samples are independent and identically distributed. In simple terms, this means the training samples should closely resemble, or be consistent with, the images from the final application scenario; otherwise the effectiveness of the algorithm cannot be guaranteed. A sufficient number of training samples (at least several thousand positives and several thousand negatives) is also a prerequisite for effective training.
Assume that all positive samples are placed in the F:\pos folder and all negative samples are placed in the F:\neg folder.
(2) Normalize the size of all positive samples:
The positive samples collected in the previous step come in many different sizes: some are 200*300, some are 500*800, and so on. The purpose of size normalization is to scale all of the images to the same size, for example 50*60.
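A minimal sketch of this normalization step, using the OpenCV Python bindings (the folder name and target size are just the examples used in this post; adjust them to your own setup):

import os
import cv2

POS_DIR = r"F:\pos"        # folder holding the positive samples
TARGET_SIZE = (50, 60)     # (width, height) to normalize to

for name in os.listdir(POS_DIR):
    if not name.lower().endswith((".jpg", ".png", ".bmp")):
        continue
    path = os.path.join(POS_DIR, name)
    img = cv2.imread(path)
    if img is None:
        continue  # skip files OpenCV cannot read
    cv2.imwrite(path, cv2.resize(img, TARGET_SIZE))  # overwrite with the resized image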
(3) Generate a positive sample description file:
The so-called positive sample description file is just a text file, although many people prefer to give it the suffix .dat. Each line of the description file records an image file name, the number of targets in that image, and the position of each target in the image (x, y, width, height).
A typical positive sample description file is as follows:
0.jpg 1 0 0 30 40
1.jpg 1 0 0 30 40
2.jpg 1 0 0 30 40
...
It is easy to see that each positive sample occupies one line of the description file: the line starts with the image file name, followed by the number of positive samples in that image (usually 1), and then the position and size of the positive sample within the image.
Assume that the F:\pos folder contains 5000 positive sample images, each containing exactly one target. We can then write a small program that traverses all image files in the folder and writes, for each image, the file name together with the position and size of the positive sample into a pos.dat file, which serves as the positive sample description file.
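A minimal sketch of such a program in Python (assuming, as an illustration, that each normalized image is 50*60 and the target fills the entire frame, so every entry is simply "1 0 0 50 60"; if your targets do not fill the frame, write the real coordinates instead):

import os

POS_DIR = r"F:\pos"
W, H = 50, 60  # normalized sample size from step (2)

with open(os.path.join(POS_DIR, "pos.dat"), "w") as f:
    for name in sorted(os.listdir(POS_DIR)):
        if name.lower().endswith((".jpg", ".png", ".bmp")):
            # one line per image: file name, target count, x, y, width, height
            f.write("%s 1 0 0 %d %d\n" % (name, W, H))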
(4) Create a positive sample vec file
Because haartraining expects its positive samples as a vec file, you need to use the createsamples program to convert the positive samples into a vec file.
Run the executable named createsamples in the bin folder under the OpenCV installation directory (in newer versions of OpenCV it has been renamed opencv_createsamples). Note that the program must be started from the command line (see my other blog post: http://blog.csdn.net/carson2005/article/details/6704589 ). Specify the path of the positive samples and the path where the generated positive sample vec file should be saved (for example, F:\pos.vec).
The command-line parameters of the createsamples program are:
-vec <vec_file_name>
Name of the output vec file containing the generated positive samples.
-img <image_file_name>
Source object image (for example, a company logo).
-bg <background_file_name>
Background description file.
-num <number_of_samples>
Number of positive samples to generate.
-bgcolor <background_color>
Background color (grayscale images are currently assumed); the background color denotes the transparent color. Because compression can introduce artifacts, a color tolerance can be specified with -bgthresh: all pixels between bgcolor - bgthresh and bgcolor + bgthresh are treated as transparent.
-bgthresh <background_color_threshold>
Tolerance around the background color (see -bgcolor above).
-inv
If specified, the colors are inverted.
-randinv
If specified, the colors are inverted randomly.
-maxidev <max_intensity_deviation>
Maximum intensity deviation of the foreground sample pixels.
-maxxangle <max_x_rotation_angle>
-maxyangle <max_y_rotation_angle>
-maxzangle <max_z_rotation_angle>
Maximum rotation angles about the x, y, and z axes, in radians.
-show
A useful debugging option. If specified, each generated sample is displayed; pressing Esc turns the display off and sample creation continues.
-w <sample_width>
Width of the output samples, in pixels.
-h <sample_height>
Height of the output samples, in pixels.
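For reference, a typical invocation might look like the following (the paths and numbers here are only illustrative; createsamples also accepts -info <description_file> for reading annotated positive images such as the pos.dat above):
"D:\Program Files\opencv\bin\createsamples.exe" -info F:\pos\pos.dat -vec F:\pos.vec -num 5000 -w 20 -h 20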
(5) Create a negative sample description file
Generate a negative sample description file in the folder where the negative samples are saved. The procedure is similar to step (3), except that a negative (background) sample description file only needs to list one image file per line; no target count or position is required. A short example is shown below.
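A typical negative sample description file (using, for illustration, a file named neg.dat saved in the F:\neg folder) simply lists the image files, one per line:

0.jpg
1.jpg
2.jpg
...

Depending on where you run the training tools, the entries may need to be absolute paths or paths relative to the working directory.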
(6) Conduct sample training
This step is carried out by calling the haartraining program under the opencv\bin directory (in newer versions of OpenCV it has been renamed opencv_haartraining). The command-line parameters of haartraining are:
-data <dir_name>
Directory in which the trained classifier is stored.
-vec <vec_file_name>
Positive sample vec file name (created by the createsamples program or by other means).
-bg <background_file_name>
Background (negative sample) description file.
-npos <number_of_positive_samples>,
-nneg <number_of_negative_samples>
Number of positive/negative samples used to train each classifier stage. Reasonable values are, for example, npos = 7000 and nneg = 3000.
-nstages <number_of_stages>
Number of stages of the cascade classifier to train.
-nsplits <number_of_splits>
Determines the weak classifier used in the stage classifiers. If it is 1, a simple stump classifier is used; if it is 2 or more, a CART classifier with number_of_splits internal nodes is used.
-mem <memory_in_mb>
Available memory, in MB, for precomputation. The more memory, the faster the training.
-sym (default)
-nonsym
Specifies whether the target object is vertically (left-right) symmetric. Vertical symmetry speeds up training; for example, a frontal face is vertically symmetric.
-minhitrate <min_hit_rate>
Minimum hit rate required of each stage classifier. The overall hit rate can be estimated as min_hit_rate raised to the power number_of_stages (a numerical example follows this parameter list).
-maxfalsealarm <max_false_alarm_rate>
Maximum false alarm rate allowed for each stage classifier. The overall false alarm rate can be estimated as max_false_alarm_rate raised to the power number_of_stages.
-weighttrimming <weight_trimming>
Specifies whether weight trimming is used and how aggressively. A reasonable value is 0.9.
-eqw
-mode <BASIC (default) | CORE | ALL>
Selects the type of Haar feature set used for training. BASIC uses only upright features, while ALL uses the full set of upright and 45-degree rotated features.
-w <sample_width>
-h <sample_height>
Size of the training samples, in pixels. This must be exactly the same size that was used when the samples were created with createsamples.
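To illustrate the hit rate and false alarm arithmetic with commonly used values (for instance nstages = 20, minhitrate = 0.995, maxfalsealarm = 0.5): the cascade as a whole can then be expected to reach a hit rate of roughly 0.995^20 ≈ 0.90 and a false alarm rate of roughly 0.5^20 ≈ 1e-6. In other words, each stage only needs to reject about half of the negatives, because the per-stage false alarm rates multiply across stages.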
An example of training a classifier:
"D: \ Program Files \ opencv \ bin \ haartraining.exe"-data \ cascade-VEC data \ pos. vec-BG negdata \ negdata. dat-NPOs 49-nneg 49-MEM 200-mode all-W 20-H 20
After training finishes, a number of subdirectories are generated under the data directory; these contain the trained classifier.
(7) Generate an XML file
When haartraining runs in the previous step, it creates a number of directories and txt files under the data directory. We need to call opencv\bin\haarconv.exe to convert these txt files into an XML file, which is the classifier itself.
At this point, classifier training is complete. All that remains is to load the XML file in your program and call the corresponding function interface to perform detection.
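As a minimal sketch of that last step, using the OpenCV Python bindings (the file names car_cascade.xml and test.jpg are placeholders, and the detectMultiScale parameters are just typical starting values):

import cv2

# Load the trained cascade (placeholder file name) and a test image.
cascade = cv2.CascadeClassifier("car_cascade.xml")
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales; returns a list of (x, y, w, h) boxes.
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(20, 20))

for (x, y, w, h) in objects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("result.jpg", img)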