Machine Learning - Cascade Classifier Training (Train Cascade Classifier)

Source: Internet
Author: User
Tags: file, info, transparent color

I. Introduction

A classifier trained with AdaBoost is a cascade classifier: the final classifier is formed by cascading several simple (weak) classifiers. During image detection, the detection window passes through the stages of the cascade in order; most candidate regions are rejected by the early stages, and only the regions that pass every stage are reported as targets.

Once the classifier has been trained, it can be applied to a region of interest in an input image: it outputs 1 if the region is likely to contain the target, and 0 otherwise. To search an entire image, the search window is moved across the image and the classifier is applied at every location. Because the size of the target is unknown, the scan is repeated with search windows of different scales.
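The staged rejection and the sliding, multi-scale search described above can be sketched in plain Python. The stub stage function used below is an illustrative assumption, standing in for the real trained stages:

```python
def run_cascade(window, stages):
    """Return 1 if the window passes every stage, else 0.

    Each stage is a predicate on the window; most windows are
    rejected by the early stages, so later stages rarely run.
    """
    for stage in stages:
        if not stage(window):
            return 0
    return 1


def scan_image(image, win, step, stages):
    """Slide a win x win window over a 2D grayscale image (a list of
    rows) and collect the (x, y, w, h) of every window that passes."""
    h, w = len(image), len(image[0])
    hits = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            window = [row[x:x + win] for row in image[y:y + win]]
            if run_cascade(window, stages):
                hits.append((x, y, win, win))
    return hits


def multi_scale_scan(image, scales, step, stages):
    """Repeat the scan with windows of several sizes, since the
    size of the target object is unknown."""
    detections = []
    for win in scales:
        detections += scan_image(image, win, step, stages)
    return detections
```

A stage here could be as simple as a brightness test; the real cascade stages are boosted classifiers over Haar-like or LBP features.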

Target detection is divided into three steps:

    1. Create the samples.
    2. Train the classifier.
    3. Use the trained classifier to detect targets.

II. Collecting Samples

The training samples are divided into positive samples and negative samples. Positive samples contain the object to be detected; negative samples are any other images. All positive sample images are normalized to the same size (for example, 24x24). Negative samples can come from arbitrary images, as long as those images do not contain the target object. The negative samples are listed in a background description file.

III. Preparation of training data

Training requires a series of samples. Samples fall into two categories: negative samples and positive samples. Negative samples are images that do not contain the object; positive samples are images of the object to be detected. Negative samples must be prepared manually, while positive samples are created with opencv_createsamples .

1. Negative samples

A negative sample can be any image, as long as it does not contain the object to be detected. The file names of the negative sample images are listed in a description file: a plain text file in which each line is one file name (including its relative path). Negative samples are also called background samples or background images; this document uses the terms interchangeably. These images may be of different sizes, but each should be larger than the training window, because negative samples are cropped from these images and scaled down to the training window size.

Here is an example of a description file:

If the directory structure is as follows:

/img
  img1.jpg
  img2.jpg
bg.txt

The contents of the bg.txt file will resemble the following:

img/img1.jpg
img/img2.jpg
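A background description file like bg.txt above can be generated with a short Python sketch; the .jpg extension and the img/ layout are the assumptions taken from the example:

```python
import os


def write_background_file(image_dir, out_path):
    """Write one image path per line, in the form a background
    description file expects."""
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith(".jpg"))
    with open(out_path, "w") as f:
        for name in names:
            f.write(os.path.join(image_dir, name) + "\n")
```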

2. Positive samples

Positive samples are generated with opencv_createsamples . They can be created either from a single image containing the object to be detected, or from a series of annotated images.

Please note that you need a large set of negative samples to feed to the training program. If the object is absolutely rigid, such as the OpenCV logo, a single positive sample image may suffice; if it is something like a human face, hundreds or even thousands of positive samples are needed. In the case of face detection, you need to cover all races, ages, expressions and even beard styles.

If there is only one image containing the object, such as a company logo, a large number of positive samples can be generated by randomly rotating the object image, changing its brightness, and placing it on arbitrary backgrounds. The number of samples generated and the degree of randomness can be controlled with the command-line parameters of opencv_createsamples .

    

Command-line arguments:

  • -vec <vec_file_name>

    The output file containing the positive samples for training.

  • -img <image_file_name>

    The input image file name (for example, a company logo).

  • -bg <background_file_name>

    The background description file, containing a list of image file names that are randomly chosen as backgrounds for the object.

  • -num <number_of_samples>

    The number of positive samples to generate.

  • -bgcolor <background_color>

    The background color (currently grayscale values are assumed); the background color denotes the transparent color. Since image compression may introduce color deviation, a tolerance can be specified with -bgthresh: all pixels with values between bgcolor - bgthresh and bgcolor + bgthresh are treated as transparent.

  • -bgthresh <background_color_threshold>

    The tolerance around the background color (see -bgcolor).

  • -inv

    If this flag is specified, the colors of the foreground image are inverted.

  • -randinv

    If this flag is specified, the colors are inverted at random.

  • -maxidev <max_intensity_deviation>

    The maximal random deviation of pixel intensity in the foreground samples.

  • -maxxangle <max_x_rotation_angle>

    The maximum rotation angle about the x-axis, in radians.

  • -maxyangle <max_y_rotation_angle>

    The maximum rotation angle about the y-axis, in radians.

  • -maxzangle <max_z_rotation_angle>

    The maximum rotation angle about the z-axis, in radians.

  • -show

    A useful debugging option. If specified, each generated sample is displayed; pressing Esc continues sample creation without displaying further samples.

  • -w <sample_width>

    The width, in pixels, of the output samples.

  • -h <sample_height>

    The height, in pixels, of the output samples.

The process for creating a sample is as follows: the input image is rotated randomly about all three axes, with the angles limited by -maxxangle, -maxyangle and -maxzangle. Pixels whose intensity lies in the range [bg_color - bg_color_threshold ; bg_color + bg_color_threshold] are then treated as transparent. White noise is added to the foreground image. If -inv is specified, the foreground colors are inverted; if -randinv is specified, it is decided at random whether each sample is inverted. The foreground is then placed onto an arbitrary background taken from the background description file, and the result is resized to the size given by -w and -h. Finally, the sample is written to the vec file specified by -vec.
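The transparency-and-compositing step can be illustrated with a minimal sketch over 2D lists of grayscale values; rotation, noise and resizing are omitted, and equal image sizes are assumed:

```python
def composite(foreground, background, bg_color, bg_thresh):
    """Place the foreground onto the background: foreground pixels
    whose intensity lies in [bg_color - bg_thresh, bg_color + bg_thresh]
    are treated as transparent, so the background shows through."""
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg, bg in zip(fg_row, bg_row):
            transparent = bg_color - bg_thresh <= fg <= bg_color + bg_thresh
            row.append(bg if transparent else fg)
        out.append(row)
    return out
```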

Positive samples can also be created from a series of pre-annotated images. The annotation information is stored in a text file, similar in form to a background description file. Each line of the file corresponds to one image: the first element is the image file name, followed by the number of object instances, and then the position and size (x, y, width, height) of each instance.

Here is an example of a description file:

Suppose the directory structure is as follows:

/img
  img_with_faces_1.jpg
  img_with_faces_2.jpg
info.dat

The contents of the file info.dat are as follows:

img/img_with_faces_1.jpg  1  140 100 45 45
img/img_with_faces_2.jpg  2  100 200 50 50  50 30 25 25

The image img_with_faces_1.jpg contains one instance of the object (a face), whose position and size are given by the rectangle (140, 100, 45, 45). The image img_with_faces_2.jpg contains two instances of the object.
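A description line in this format can be parsed with a few lines of Python; the helper name is our own:

```python
def parse_info_line(line):
    """Split one annotation line into the image file name and a
    list of (x, y, width, height) object rectangles."""
    parts = line.split()
    filename, count = parts[0], int(parts[1])
    nums = [int(p) for p in parts[2:]]
    rects = [tuple(nums[i * 4:i * 4 + 4]) for i in range(count)]
    return filename, rects
```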

To create positive samples from such a set of data, specify -info on the command line instead of the -img parameter used earlier:

    • -info <collection_file_name>

      The description file listing the images together with the positions and sizes of the objects they contain.

In this mode the sample creation process is as follows: each object instance is cropped from its image, resized to the target size, and saved to the output vec file. The image is not distorted in this process, so the only applicable command-line options are -w, -h, -show and -num.
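The crop-and-resize step can be sketched over 2D lists; the nearest-neighbour resize below is a deliberate simplification of whatever resampling opencv_createsamples actually performs:

```python
def crop(image, x, y, w, h):
    """Cut the (x, y, w, h) rectangle out of a 2D image."""
    return [row[x:x + w] for row in image[y:y + h]]


def resize_nearest(image, new_w, new_h):
    """Nearest-neighbour resize of a 2D image to new_w x new_h."""
    h, w = len(image), len(image[0])
    return [[image[sy * h // new_h][sx * w // new_w]
             for sx in range(new_w)]
            for sy in range(new_h)]
```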

opencv_createsamples can also be used to view and inspect the positive samples stored in a vec file. In that case only the -vec, -w and -h parameters need to be specified; opencv_createsamples then displays the positive sample images one by one.

During training, the training program does not care how the vec file containing the positive samples was generated, so you can write your own program to produce vec files. Among the tools shipped with OpenCV, however, only opencv_createsamples can create a vec file of positive samples.

An example vec file is located at opencv/data/vec_files/trainingfaces_24-24.vec. It can be used to train a face classifier with a 24x24 window size (-w 24 -h 24).

IV. Training the Cascade Classifier

  

The next step is to train the classifier. Both opencv_traincascade and opencv_haartraining can be used to train a cascade classifier, but only opencv_traincascade is described here; opencv_haartraining is used in a similar way.

The following are the command-line arguments for opencv_traincascade, grouped by purpose:

  1. General parameters:

    • -data <cascade_dir_name>

      A directory name; if it does not exist, the training program creates it. The trained classifier is stored here.

    • -vec <vec_file_name>

      The vec file containing the positive samples (generated by opencv_createsamples).

    • -bg <background_file_name>

      The background description file, i.e. the file listing the negative sample images.

    • -numPos <number_of_positive_samples>

      The number of positive samples used to train each stage.

    • -numNeg <number_of_negative_samples>

      The number of negative samples used to train each stage. It may be greater than the number of images listed in the -bg description file.

    • -numStages <number_of_stages>

      The number of cascade stages to be trained.

    • -precalcValBufSize <precalculated_vals_buffer_size_in_Mb>

      The size of the buffer for precomputed feature values, in megabytes.

    • -precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb>

      The size of the buffer for precomputed feature indices, in megabytes. The more memory available, the shorter the training time.

    • -baseFormatSave

      This argument is relevant only for Haar-like features. If specified, the cascade is saved in the old format.

  2. Cascading parameters:

    • -stageType <BOOST (default)>

      The type of stages. Only boosted classifiers are supported as stage types at the moment.

    • -featureType <{HAAR (default), LBP}>

      The type of features: HAAR - Haar-like features; LBP - local binary patterns.

    • -w <sampleWidth>

    • -h <sampleHeight>

      The size of the training samples, in pixels. This must exactly match the size used when the training samples were created with opencv_createsamples.

  3. Boosted classifier parameters:

    • -bt <{DAB, RAB, LB, GAB (default)}>

      The type of boosted classifiers: DAB - Discrete AdaBoost, RAB - Real AdaBoost, LB - LogitBoost, GAB - Gentle AdaBoost.

    • -minHitRate <min_hit_rate>

      The minimal desired hit rate for each stage of the classifier. The overall hit rate is approximately min_hit_rate^number_of_stages.

    • -maxFalseAlarmRate <max_false_alarm_rate>

      The maximal desired false alarm rate for each stage of the classifier. The overall false alarm rate is approximately max_false_alarm_rate^number_of_stages.

    • -weightTrimRate <weight_trim_rate>

      Specifies whether weight trimming should be used, and its rate. A good value is 0.95.

    • -maxDepth <max_depth_of_weak_tree>

      The maximal depth of a weak classifier tree. A good value is 1, which corresponds to decision stumps.

    • -maxWeakCount <max_weak_tree_count>

      The maximal number of weak trees per stage. Each boosted classifier (stage) will have as many weak trees as needed (<= maxWeakCount) to achieve the given -maxFalseAlarmRate.

  4. Haar-like feature parameters:

    • -mode <BASIC (default) | CORE | ALL>

      Selects the type of Haar features used during training. BASIC uses only upright features, while ALL uses the full set of upright and 45-degree rotated features. For more details, see [Rainer2002].

  5. LBP feature parameters:

    LBP features have no parameters.
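The hit rate and false alarm rate relations above (overall rate ≈ per-stage rate raised to the number of stages) can be checked numerically; the 20-stage values below are illustrative:

```python
def cascade_rates(min_hit_rate, max_false_alarm_rate, num_stages):
    """Overall detection and false alarm rates of a cascade,
    assuming every stage exactly meets its per-stage targets."""
    return (min_hit_rate ** num_stages,
            max_false_alarm_rate ** num_stages)


hit, fa = cascade_rates(0.999, 0.5, 20)
# Roughly 0.98 overall hit rate and under 1e-6 false alarm rate,
# which is why the per-stage targets can be so loose.
```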

When opencv_traincascade finishes, the trained cascade classifier is stored in the file cascade.xml in the directory specified by -data. The other files in that directory are intermediate training results: if training is interrupted, re-running the program picks up the previous results instead of retraining from scratch. After training is over, the intermediate files can be deleted.
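Putting the pieces together, a training invocation can be assembled as below; the parameter values and the file paths (data/cascade, samples.vec, bg.txt) are illustrative assumptions, not values taken from this article:

```python
# Illustrative training parameters; adjust to your own data.
params = {
    "-data": "data/cascade",
    "-vec": "samples.vec",
    "-bg": "bg.txt",
    "-numPos": 1000,
    "-numNeg": 600,
    "-numStages": 20,
    "-w": 24,
    "-h": 24,
    "-featureType": "LBP",
}

cmd = ["opencv_traincascade"]
for flag, value in params.items():
    cmd += [flag, str(value)]

print(" ".join(cmd))
```

Note that the -w and -h values must match those passed to opencv_createsamples when the vec file was created.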

Once the training is over, you can test the well-trained cascade classifier.
