Object Recognition and Scene Understanding (5): peopledetect in OpenCV

Source: Internet
Author: User
Tags: svm

Starting with OpenCV 2.0, HOG-related functionality was added along with a sample program, using the method first proposed by Navneet Dalal (then at INRIA, France) and Bill Triggs at CVPR 2005.

First, HOG is used to perform people detection; a complete pipeline is provided. In peopledetect.cpp, the main steps are HOG feature extraction, training, and recognition. You can call
hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector()); to load a pre-trained model and then detect directly with hog.detectMultiScale(...).

 

1. Program description
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <string.h>
#include <ctype.h>

using namespace cv;
using namespace std;

//const char* image_filename = "people.jpg";
const char* image_filename = "./../2.jpg";

void help()
{
    printf("\nDemonstrate the use of the HoG descriptor using\n"
           "  HOGDescriptor::hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());\n"
           "Usage:\n"
           "./peopledetect (<image_filename> | <image_list>.txt)\n\n");
}

int main(int argc, char** argv)
{
    Mat img;
    FILE* f = 0;
    char _filename[1024];

    if( argc == 1 )
    {
        printf("Usage: peopledetect (<image_filename> | <image_list>.txt)\n");
        //return 0;
    }
    if( argc > 1 )
        image_filename = argv[1];

    img = imread(image_filename);
    if( !img.data )
    {
        printf("Unable to load the image\n"
               "Pass it as the first parameter: hogpeopledetect <path to people.jpg>\n");
        return -1;
    }

    if( img.data )
    {
        strcpy(_filename, image_filename);
    }
    else
    {
        f = fopen(argv[1], "rt");
        if( !f )
        {
            fprintf(stderr, "ERROR: the specified file could not be loaded\n");
            return -1;
        }
    }

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    namedWindow("people detector", 1);

    for(;;)
    {
        char* filename = _filename;
        if( f )
        {
            if( !fgets(filename, (int)sizeof(_filename)-2, f) )
                break;
            //while(*filename && isspace(*filename))
            //    ++filename;
            if( filename[0] == '#' )
                continue;
            int l = strlen(filename);
            while( l > 0 && isspace(filename[l-1]) )
                --l;
            filename[l] = '\0';
            img = imread(filename);
        }
        printf("%s:\n", filename);
        if( !img.data )
            continue;

        fflush(stdout);
        vector<Rect> found, found_filtered;
        double t = (double)getTickCount();
        // Run the detector with default parameters. To get a higher hit-rate
        // (and more false alarms, respectively), decrease the hitThreshold and
        // groupThreshold (set groupThreshold to 0 to turn off the grouping completely).
        hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);
        t = (double)getTickCount() - t;
        printf("\tdetection time = %gms\n", t*1000./cv::getTickFrequency());

        size_t i, j;
        for( i = 0; i < found.size(); i++ )
        {
            Rect r = found[i];
            for( j = 0; j < found.size(); j++ )
                if( j != i && (r & found[j]) == r )
                    break;
            if( j == found.size() )
                found_filtered.push_back(r);
        }
        for( i = 0; i < found_filtered.size(); i++ )
        {
            Rect r = found_filtered[i];
            // The HOG detector returns slightly larger rectangles than the real objects,
            // so we slightly shrink the rectangles to get a nicer output.
            r.x += cvRound(r.width*0.1);
            r.width = cvRound(r.width*0.8);
            r.y += cvRound(r.height*0.07);
            r.height = cvRound(r.height*0.8);
            rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);
        }
        imshow("people detector", img);
        int c = waitKey(0) & 255;
        if( c == 'q' || c == 'Q' || !f )
            break;
    }
    if( f )
        fclose(f);
    return 0;
}

 

Brief description of the program code
1) getDefaultPeopleDetector() returns the 3780-dimensional detection vector (105 blocks, 4 cell histograms per block, 9 bins per histogram: 105 × 4 × 9 = 3780 values).
2) cv::HOGDescriptor hog; creates the class object and initializes a series of member variables to their defaults:
winSize(64,128), blockSize(16,16), blockStride(8,8),
cellSize(8,8), nbins(9), derivAperture(1), winSigma(-1),
histogramNormType(L2Hys), L2HysThreshold(0.2), gammaCorrection(true)

3) The detection call: detectMultiScale(img, found, 0, cv::Size(8,8), cv::Size(32,32), 1.05, 2);
The parameters are: the image to scan, the returned result list, the threshold hitThreshold, the window step winStride, the image padding margin, the scale factor, and the grouping threshold groupThreshold. On the author's test image, detection is quite sensitive to these values: raising hitThreshold from 0 to 0.01 or even 0.001 makes the person go undetected; raising the scale factor from 1.05 to 1.1 also loses the detection, though 1.06 still works; groupThreshold can be lowered from 2 to 1 but not below about 0.8; and other padding values such as (32,32) can be varied as well.

The function works as follows:
(1) Compute the number of pyramid levels.
For a 530×402 image, the binding ratio is 402/128 (the window must fit in height), and log(402/128)/log(1.05) ≈ 23.5, so there are about 24 levels.
(2) Loop over the levels. Each iteration executes:
HOGThreadData& tdata = threadData[getThreadNum()];
Mat smallerImg(sz, img.type(), tdata.smallerImgBuf.data);
and then calls the core function:
detect(smallerImg, tdata.locations, hitThreshold, winStride, padding);
The parameters are: the scaled image, the returned result list, the threshold, the step size, and the padding margin.
The detect function works as follows:
(a) Compute the padded image size paddedImgSize.
(b) Create a class object: HOGCache cache(this, img, padding, padding, nwindows == 0, cacheStride). During construction, HOGCache::init first computes the gradient descriptors via computeGradient, yielding the 105 blocks with 36 values each.
(c) Compute the number of windows. Taking the first level as an example, the count is ((530 + 32*2 - 64)/8 + 1) × ((402 + 32*2 - 128)/8 + 1) = 67 × 43 = 2881, where (32,32) is the padding, the division by 8 comes from the (8,8) winStride, and a padding of (24,16) is also possible.
(d) In each window, loop over the 105 blocks. For each block, getBlock computes and normalizes its 36 values, which are multiplied by the corresponding weights in the detector vector; the products are accumulated over all 105 blocks, and if the resulting score s >= hitThreshold, the window is reported as a detection.
4) That is the main flow; many details still need further study.

For detailed explanations, see the authors' paper.

2. A common misunderstanding

OpenCV's built-in detector was trained on the samples provided by Navneet Dalal and Bill Triggs, which are not necessarily suitable for your application. For your specific scenario, you may therefore need to retrain the classifier yourself.

In the previous topic, we mentioned that an SVM can be trained on samples to obtain a classifier, which can be saved as an XML file:

svm.train(data_mat, res_mat, Mat(), Mat(), param);  // train the SVM on the training data with the chosen learning parameters
svm.save("E:/Apple/svm_data.xml");

 

Can these files be directly used to detect targets?

HOGDescriptor hog1;
hog1.load("SVM_DATA.xml");
hog1.detectMultiScale(img, found);

The answer is clearly no. SVM training produces a classifier, but HOGDescriptor needs a detector; the two are essentially different things.

The following is the classifier (XML file) trained in the previous topic, with the support vectors omitted for brevity. The meaning of each parameter is not described in detail; anyone familiar with SVMs will find it easy to read.

<?xml version="1.0"?>
<opencv_storage>
<my_svm type_id="opencv-ml-svm">
  <svm_type>C_SVC</svm_type>
  <kernel>
    <type>RBF</type>
    <gamma>8.9999999999999997e-002</gamma>
  </kernel>
  <C>10.</C>
  <term_criteria>
    <epsilon>1.1920928955078125e-007</epsilon>
    <iterations>2147483647</iterations>
  </term_criteria>
  <var_all>1764</var_all>
  <var_count>1764</var_count>
  <class_count>2</class_count>
  <class_labels type_id="opencv-matrix">
    <rows>1</rows>
    <cols>2</cols>
    <dt>i</dt>
    <data>0 1</data>
  </class_labels>
  <sv_total>5</sv_total>
  <support_vectors>
    <_>support vectors omitted</_>
  </support_vectors>
  <decision_functions>
    <_>
      <sv_count>5</sv_count>
      <rho>-2.9438933791848948e-001</rho>
      <alpha><!-- five alpha values, garbled in the source --></alpha>
      <index>0 1 2 3 4</index>
    </_>
  </decision_functions>
</my_svm>
</opencv_storage>

 

 

A detector is just a vector, and it can be obtained directly from the classifier.

 

3. Solution

The following reference explains how to train on your own samples and how to obtain a detector from the resulting classifier.

Reference:

OpenCV 2.0 peopledetect learning notes: http://www.opencv.org.cn/forum/viewtopic.php?f=1&t=9146

 

 

 
