This article presents the pupil detection procedure written for the author's graduation design; reproduction in any form is declined.
This post builds on the author's two earlier posts, "Camera (local image) read and output program based on Qt and OpenCV" and "Human face (eye) detection program based on OpenCV and Qt". The main principle: detect the eye-region image, then use edge detection and the Hough transform to achieve accurate pupil detection.
First, an image-processing class is established to process each frame.
#ifndef IMGPROCESS_H
#define IMGPROCESS_H

class imgprocess
{
private:
    Mat inimg;            // input image
    Mat outimg;           // output image
    Mat Leye;             // left-eye image
    Mat Reye;             // right-eye image
    Mat Leye_g;           // left-eye edge image
    Mat Reye_g;           // right-eye edge image
    CvRect drawing_box;   // detected eye-pair rectangle
public:
    vector<Vec3f> Lcircles;
    vector<Vec3f> Rcircles;
    imgprocess(Mat image) : inimg(image), drawing_box(cvRect(0, 0, 0, 0)) {}
    void eyedetect();                      // human-eye detection
    Mat outputimg();                       // output the detection image
    void divideeye();                      // split into left and right eye
    Mat outleye();                         // output the left-eye result
    Mat outreye();                         // output the right-eye result
    Mat edgedetect(Mat &edgeimg);          // edge detection
    void eyeedge();                        // detect left- and right-eye edges
    vector<Vec3f> hough(Mat &midimage);    // Hough transform
    void findcenter();                     // locate the pupil centers
    Mat plotC(vector<Vec3f> circles, Mat &midimage); // draw the Hough results
};
#endif // IMGPROCESS_H
1. Human Eye Detection
In the image-processing class, we first use the human-eye detection function obtained in the previous post to detect the eye region. The function void eyedetect() performs the detection:
void imgprocess::eyedetect()
{
    detectanddisplay(inimg, drawing_box);
    outimg = inimg;
}
The detectanddisplay function is:
void detectanddisplay(Mat &frame, CvRect &box)
{
    string face_cascade_name = "haarcascade_mcs_eyepair_big.xml"; // load a trained cascade
    CascadeClassifier face_cascade;   // set up a classifier
    string window_name = "Camera";
    if (!face_cascade.load(face_cascade_name)) {
        printf("[ERROR] no cascade\n");
    }
    std::vector<Rect> faces;          // vector holding the detection results
    Mat frame_gray;
    cvtColor(frame, frame_gray, CV_BGR2GRAY);   // convert to grayscale
    equalizeHist(frame_gray, frame_gray);       // histogram equalization
    face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2,
                                  0 | CV_HAAR_SCALE_IMAGE, Size(30, 30)); // detect the eye pair
    // draw the boxes
    for (int i = 0; i < faces.size(); i++) {
        Point centerA(faces[i].x, faces[i].y);
        Point centerB(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
        rectangle(frame, centerA, centerB, Scalar(255, 0, 0));
        box = faces[0];
    }
    // imshow(window_name, frame);
}
After running the eye detection function, the result is displayed on the label, as shown in Figure 1. Note that OpenCV's detection can produce false or missed detections: it may report several "eye" regions (for example, eyebrows detected as eyes), or fail to find the eyes at all. The function therefore stores only the first detected rectangle in drawing_box, and leaves drawing_box unassigned when nothing is detected. Also note that when the subject wears glasses, detection is noticeably less reliable.
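The article simply keeps faces[0]. A more defensive choice (a sketch of an alternative, not the author's code; EyeRect is a hypothetical stand-in for cv::Rect so the logic runs without OpenCV) is to keep the largest candidate, since the eye-pair cascade's biggest hit is usually the true eye region rather than an eyebrow fragment:

```cpp
#include <vector>

// Hypothetical stand-in for cv::Rect, so the idea runs without OpenCV.
struct EyeRect { int x, y, width, height; };

// Keep the detection with the largest area; returns a zero-sized
// rectangle when the candidate list is empty (i.e. nothing detected).
EyeRect pickLargest(const std::vector<EyeRect>& faces) {
    EyeRect best = {0, 0, 0, 0};
    for (const EyeRect& r : faces) {
        if (r.width * r.height > best.width * best.height)
            best = r;
    }
    return best;
}
```

The zero-sized fallback matches the article's convention of leaving drawing_box unassigned (width 0) when detection fails.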
Figure 1
2. Left and Right Eye Image Segmentation
drawing_box is a rectangle structure with four fields: drawing_box.x and drawing_box.y are the x and y coordinates of the box, and drawing_box.width and drawing_box.height are its width and height. After the eye region is detected, the void divideeye() function splits it into left-eye and right-eye regions, which simplifies the subsequent computation. Here leye_box is the left-eye rectangle and Leye the left-eye image; reye_box is the right-eye rectangle and Reye the right-eye image.
void imgprocess::divideeye()
{
    if (drawing_box.width > 0) {
        CvRect leye_box;
        leye_box.x = drawing_box.x + 1;
        leye_box.y = drawing_box.y + 1;
        leye_box.height = drawing_box.height - 1;
        leye_box.width = floor(drawing_box.width / 2) - 1;
        CvRect reye_box;
        reye_box.x = leye_box.x + leye_box.width;
        reye_box.y = drawing_box.y + 1;
        reye_box.height = drawing_box.height - 1;
        reye_box.width = leye_box.width - 1;
        Leye = inimg(leye_box);
        Reye = inimg(reye_box);
        // imshow("L", Leye);
        // imshow("R", Reye);
    }
}
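The arithmetic of this split can be checked in isolation. The following sketch (with a hypothetical Box struct standing in for CvRect, so it runs without OpenCV) reproduces the same offsets:

```cpp
// Hypothetical stand-in for CvRect.
struct Box { int x, y, width, height; };

// Split the detected eye-pair box into left and right halves, shrunk by
// one pixel on each side, mirroring the offsets used in divideeye().
void splitEyeBox(const Box& d, Box& l, Box& r) {
    l.x = d.x + 1;
    l.y = d.y + 1;
    l.height = d.height - 1;
    l.width = d.width / 2 - 1;   // integer division == floor for positive widths
    r.x = l.x + l.width;         // the right box starts where the left one ends
    r.y = d.y + 1;
    r.height = d.height - 1;
    r.width = l.width - 1;
}
```

For example, a detected box of width 100 at x = 10 yields a left half of width 49 starting at x = 11 and a right half of width 48 starting at x = 60, so the two halves tile the eye region without overlapping.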
3. Image Edge Detection
This article uses the Canny algorithm to detect image edges. The principle of the Canny algorithm is not covered in detail here; readers can consult references online, since this article directly uses the detection function that OpenCV provides. Edge detection for a single image is done by imgprocess::edgedetect: the image is first converted from color to grayscale, then Gaussian-smoothed, then histogram-equalized, and finally passed to the Canny function for edge detection. The use of these functions and their parameters is documented online; for the Canny function the most important settings are the low and high thresholds, and the values used here are the result of repeated trials. Parameter choice depends heavily on illumination and background: the values in this article suit the author's environment (the background of Figure 1), and readers should experiment to see whether they transfer to other environments.
Mat imgprocess::edgedetect(Mat &edgeimg)
{
    Mat edgeout;
    cvtColor(edgeimg, edgeimg, CV_BGR2GRAY);          // convert color to grayscale
    GaussianBlur(edgeimg, edgeimg, Size(9, 9), 2, 2); // Gaussian smoothing
    equalizeHist(edgeimg, edgeimg);                   // histogram equalization
    // input image, output image, low threshold, high threshold (OpenCV recommends
    // about 3x the low threshold), internal Sobel aperture size
    Canny(edgeimg, edgeout, 100, 200, 3);
    return edgeout;
}
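The role of the two Canny thresholds can be illustrated without OpenCV. The sketch below (illustrative only, not the author's code, and only one part of the full Canny pipeline) applies the double-threshold-with-hysteresis idea to a 1-D row of gradient magnitudes:

```cpp
#include <cstddef>
#include <vector>

// Double thresholding with hysteresis on a 1-D row of gradient magnitudes:
// values >= high are strong edges; values in [low, high) survive only if
// connected (here: adjacent) to a strong edge; values < low never do.
std::vector<bool> hysteresis1D(const std::vector<int>& mag, int low, int high) {
    std::size_t n = mag.size();
    std::vector<bool> edge(n, false);
    for (std::size_t i = 0; i < n; ++i)
        if (mag[i] >= high) edge[i] = true;   // mark strong edges
    bool changed = true;
    while (changed) {                          // grow into adjacent weak pixels
        changed = false;
        for (std::size_t i = 0; i < n; ++i) {
            if (edge[i] || mag[i] < low) continue;
            bool nearStrong = (i > 0 && edge[i - 1]) || (i + 1 < n && edge[i + 1]);
            if (nearStrong) { edge[i] = true; changed = true; }
        }
    }
    return edge;
}
```

With low = 100 and high = 200 as in the article, an isolated magnitude of 150 is discarded, but a 150 next to a 250 is kept: this is why raising the low threshold prunes fuzzy edge tails while raising the high threshold removes entire weak edges.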
Then we set up a function to call the edge detection function on the left and right eye images.
void imgprocess::eyeedge()
{
    Leye_g = edgedetect(Leye);
    Reye_g = edgedetect(Reye);
    imshow("L", Leye_g);
    imshow("R", Reye_g);
}
The detection results are shown in Figure 2.
Figure 2
4. Hough Transform Center Detection
After obtaining the edge image of the eye region, we can use the Hough transform to find the center of the pupil. The basic principle of the Hough transform is not covered here; the detection function is, again, the one OpenCV provides.
vector<Vec3f> imgprocess::hough(Mat &midimage)
{
    vector<Vec3f> circles;
    // dp = 1.5, minDist = 5; the min and max radius are tied to the height of
    // the detected box. The threshold arguments between minDist and the radius
    // bounds were garbled in the original post and are left marked here.
    HoughCircles(midimage, circles, CV_HOUGH_GRADIENT, 1.5, 5,
                 /* threshold values garbled in the original */,
                 drawing_box.height / 4, drawing_box.height / 3);
    return circles;
}
The article at http://www.tuicool.com/articles/Mn2EBn explains the principle of the Hough transform and the use of this function in detail; the parameter descriptions below are drawn from it.
void HoughCircles(InputArray image, OutputArray circles, int method,
                  double dp, double minDist,
                  double param1 = 100, double param2 = 100,
                  int minRadius = 0, int maxRadius = 0);

· The first parameter, image of type InputArray, is the input image: an 8-bit, single-channel grayscale image.
· The second parameter, circles of type OutputArray, stores the detection results after HoughCircles is called; each detected circle is a three-element floating-point vector (x, y, radius).
· The third parameter, method of type int, is the detection method. Currently the only method OpenCV implements is the Hough gradient method, CV_HOUGH_GRADIENT.
· The fourth parameter, dp of type double, is the inverse ratio of the accumulator resolution to the image resolution: dp = 1 means the accumulator has the same resolution as the input image, and dp = 2 means half that resolution.
· The fifth parameter, minDist of type double, is the minimum distance allowed between the centers of detected circles.
· The sixth parameter, param1 of type double, has a default value of 100 and is a parameter of the detection method set by method. For the only current method, CV_HOUGH_GRADIENT, it is the high threshold passed to the Canny edge detector (the low threshold is half of it).
· The seventh parameter, param2 of type double, also has a default value of 100 and is a parameter of the method. For CV_HOUGH_GRADIENT it is the accumulator threshold for circle centers at the detection stage: the smaller it is, the more spurious circles may be detected.
· The eighth parameter, minRadius of type int, has a default value of 0 and is the minimum circle radius.
· The ninth parameter, maxRadius of type int, also has a default value of 0 and is the maximum circle radius.
After tuning the parameters, the author found that the sixth and seventh parameters have the greatest influence on the detection results; the adjusted values are the ones written into the function.
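To make the accumulator-threshold discussion concrete, here is a toy sketch of the voting idea behind Hough circle detection (this is not OpenCV's gradient method, and Pt and houghCenter are hypothetical names): for a known radius, every edge point votes for all candidate centers at that distance, and the best-voted cell wins. The accumulator threshold (param2) corresponds to the minimum vote count a cell needs to be reported.

```cpp
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Pt { int x, y; };

// For a single known radius, each edge point votes for the centers of all
// circles of that radius passing through it; the cell with the most votes
// is taken as the circle center.
Pt houghCenter(const std::vector<Pt>& edgePts, int radius) {
    const double PI = 3.14159265358979323846;
    std::map<std::pair<int, int>, int> votes;  // (cx, cy) -> vote count
    for (const Pt& p : edgePts) {
        for (int deg = 0; deg < 360; deg += 10) {  // sample candidate centers
            double a = deg * PI / 180.0;
            int cx = (int)std::lround(p.x - radius * std::cos(a));
            int cy = (int)std::lround(p.y - radius * std::sin(a));
            ++votes[std::make_pair(cx, cy)];
        }
    }
    Pt best = {0, 0};
    int bestVotes = 0;
    for (const auto& v : votes) {
        if (v.second > bestVotes) {
            bestVotes = v.second;
            best = {v.first.first, v.first.second};
        }
    }
    return best;
}
```

Lowering the required vote count lets weaker (possibly false) centers through, which is exactly the trade-off described for param2 above.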
The HoughCircles function returns three-element vectors holding the x and y coordinates of each detected circle and its radius r. Next, a function is created that draws the circles to display the detection results:
Mat imgprocess::plotC(vector<Vec3f> circles, Mat &midimage)
{
    for (size_t i = 0; i < circles.size(); i++) {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cout << i << ":" << circles[i][0] << "," << circles[i][1]
             << "," << circles[i][2] << endl;
        // draw the center (cvRound rounds to the nearest integer)
        circle(midimage, center, 1, Scalar(255, 0, 0), -1, 8);
        // draw the circle contour
        circle(midimage, center, radius, Scalar(255, 0, 0), 1, 8);
    }
    return midimage;
}
Finally, a function for Hough transformation of right and left eye is established to invoke the above two functions:
void imgprocess::findcenter()
{
    Lcircles = hough(Leye_g);
    Rcircles = hough(Reye_g);
    Leye = plotC(Lcircles, Leye);
    Reye = plotC(Rcircles, Reye);
}
The final results are shown in Figure 3. They show that detection is not entirely stable: there are both missed and false detections, which may come down to the parameter settings. With the parameters used here the pupil can be detected accurately once the face is positioned suitably relative to the camera, but the result is not necessarily continuous from frame to frame. In the false-detection cases, even when extra circles appear, the correct one is also present (Figure 3, right), and it is the most stable relative to the spurious results.
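Since the correct center recurs across frames while spurious ones vary, one simple way to exploit this stability (an illustrative addition, not part of the article's code) is to take the median of each center coordinate over the last few frames:

```cpp
#include <algorithm>
#include <vector>

// Median of recent values of one center coordinate. Because the true
// pupil center recurs more often than any single spurious detection,
// the median rejects occasional outlier frames.
int medianOf(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];  // upper median for even-sized inputs
}
```

Applying this separately to the recent x and y values taken from Lcircles and Rcircles would smooth over a frame where the Hough transform locks onto a reflection or eyelash instead of the pupil.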
Figure 3
Calling the functions above at the image-processing point of the earlier post "Camera (local image) read and output program based on Qt and OpenCV" produces the results shown in Figure 4.
imgprocess pro(frame);                         // create the image-processing object
pro.eyedetect();                               // human-eye detection
Mat image = pro.outputimg();                   // output the detection image
imshow("camera", image);
QImage img = Mat2QImage(image);                // convert Mat format to QImage format
ui->label->setPixmap(QPixmap::fromImage(img)); // display the result on the label
// ui->label->setScaledContents(true);         // make the image fit the label size
pro.divideeye();                               // split into left and right eye
pro.eyeedge();                                 // pupil edge detection
pro.findcenter();                              // Hough transform to find the centers
Mat mLeye = pro.outleye();                     // output the pupil localization result
QImage qLeye = Mat2QImage(mLeye);
ui->label_2->setPixmap(QPixmap::fromImage(qLeye));
ui->label_2->setScaledContents(true);
Mat mReye = pro.outreye();
QImage qReye = Mat2QImage(mReye);
ui->label_3->setPixmap(QPixmap::fromImage(qReye));
ui->label_3->setScaledContents(true);
Figure 4
This post has introduced the application of OpenCV's image-processing functions. The next article, "Pupil localization and tracking program based on Qt and OpenCV", will cover the processing of the detected data to achieve pupil localization and tracking. The complete image-processing class and the code that calls it are given in the next article.