Viola-Jones human eye detection algorithm + mean shift tracking algorithm
This post's code detects and tracks the human eye region in a video, tested with the eye detector that ships with MATLAB's Computer Vision Toolbox.
Here is how the MATLAB documentation describes the algorithm:
Viola-Jones is a very common algorithm for human eye and face detection. Its features are Haar-like features, and its classifier is a cascade AdaBoost classifier.
The Viola-Jones face detector is a face detection framework jointly proposed by Paul Viola and Michael J. Jones. It greatly improved both the speed and the accuracy of face detection.
- Speed improvement: integral images are used to extract image feature values, so feature computation is very fast. At the same time, the AdaBoost classifier keeps only the most useful features, which also reduces the computational complexity of detection.
- Accuracy improvement: the AdaBoost classifier is extended into a cascade AdaBoost classifier, which improves the accuracy of face detection (reducing both the miss rate and the false-detection rate).
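The integral-image trick behind the speed improvement is easy to sketch. The following Python illustration is ours, not part of the detector's code: once the integral image is built, the sum of any rectangular region (the building block of a Haar-like feature) costs only four lookups.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y+1, :x+1] (cumulative sum over rows then columns)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] in O(1) via four corner lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# sum of the 2x3 rectangle covering rows 1..2, cols 1..3
region_sum = box_sum(ii, 1, 1, 2, 3)
```

A Haar-like feature is then just the difference of two or three such box sums, which is why the detector can evaluate thousands of features per window cheaply.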
A detailed introduction to this algorithm can be found at the link:
For the tracking part, the most common choice is the mean shift tracking algorithm.
Mean shift was proposed by Fukunaga in 1975. It generally refers to an iterative procedure: compute the offset mean (the mean shift vector) of the current point, take that mean as the new starting point, and keep moving until some termination condition is met.
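The iterative step described above can be sketched in a few lines. This is a minimal Python illustration with a flat (uniform) kernel; the function name and parameters are ours, not from any library:

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, tol=1e-3, max_iter=100):
    """Iteratively move `start` to the mean of the sample points within
    `bandwidth` of the current position; stop when the shift is below `tol`."""
    y = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        dist = np.linalg.norm(points - y, axis=1)
        near = points[dist <= bandwidth]
        if len(near) == 0:
            break
        new_y = near.mean(axis=0)            # the "offset mean" of the current point
        if np.linalg.norm(new_y - y) < tol:  # termination condition
            return new_y
        y = new_y                            # mean becomes the new starting point
    return y

# samples clustered around (5, 5); start the search at (4, 4)
rng = np.random.default_rng(0)
points = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))
mode = mean_shift(points, start=[4.0, 4.0], bandwidth=2.0)
```

Each iteration drifts the estimate toward the densest nearby region, which is exactly why mean shift converges to a local mode of the sample density.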
The mean shift algorithm is a kernel density estimation method: it requires no prior knowledge and relies entirely on density-function values computed from sample points in feature space. For a set of sampled data, the histogram method usually divides the value range into several equal intervals, groups the data by interval, and takes the ratio of each group's count to the total number of samples as the probability of that unit. The kernel density estimation method is similar to the histogram method, but adds a kernel function that smooths the data. With sufficient samples, a kernel density estimate gradually converges to the underlying density function; that is, it can estimate the density of data drawn from any distribution. The essence of the algorithm can be explained with a single picture: a mean that keeps drifting, which is exactly what the name "mean shift" evokes.
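The histogram-versus-kernel contrast above can be made concrete. Below is a small Python sketch (our own illustration) of a 1-D kernel density estimate using the Epanechnikov kernel, the same kernel profile the tracking code later uses for its spatial weight matrix:

```python
import numpy as np

def epanechnikov_kde(samples, x, h):
    """Kernel density estimate at points x with bandwidth h, using the
    Epanechnikov kernel K(u) = 0.75*(1 - u^2) for |u| <= 1, else 0."""
    u = (x[:, None] - samples[None, :]) / h
    k = 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)
    return k.sum(axis=1) / (len(samples) * h)

# samples from a standard normal; the KDE should peak near 0
samples = np.random.default_rng(1).normal(0.0, 1.0, 500)
x = np.linspace(-4.0, 4.0, 9)
density = epanechnikov_kde(samples, x, h=0.5)
```

Unlike a histogram, this estimate is smooth in x and does not depend on where bin edges happen to fall; the bandwidth h plays the role the bin width plays for histograms.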
Further explanation of the figure and of the mean shift algorithm can be found at the links:
The complete code follows:
clc; clear all; close all; clf reset;
%% -------- Human eye detection section begins --------
videoObj = VideoReader('eye.mp4');              % read the video file
nFrames = get(videoObj, 'NumberOfFrames');      % number of frames in the video
img = read(videoObj, 1);                        % read the 1st frame for detection
eyeDetect = vision.CascadeObjectDetector('EyePairBig'); % Viola-Jones eye-pair detector from the toolbox
eyes = step(eyeDetect, img);                    % detect eyes; returns the bounding box of the eye pair
hold on; imshow(img);                           % show the first frame
for i = 1:size(eyes, 1)
    % mark the detected eye position with a rectangle
    rectangle('Position', eyes(i,:), 'LineWidth', 1, 'LineStyle', '-', 'EdgeColor', 'r');
end
title('Eyes Detection');                        % label the detection result
hold off; pause(0.000001)                       % brief pause so the figure refreshes
rect = eyes(1,:);                               % save the detected location
temp = img(rect(2):rect(2)+rect(4), rect(1):rect(1)+rect(3), :); % crop the detected eye region
[a, b, c] = size(temp);                         % dimensions of the cropped patch

%% compute the weight matrix of the target image
y(1) = a/2; y(2) = b/2;                         % patch center
tic_x = rect(1) + rect(3)/2;                    % trajectory start: center of the box
tic_y = rect(2) + rect(4)/2;
m_wei = zeros(a, b);                            % weight matrix
h = y(1)^2 + y(2)^2;                            % bandwidth
for i = 1:a
    for j = 1:b
        dist = (i - y(1))^2 + (j - y(2))^2;
        m_wei(i, j) = 1 - dist/h;               % Epanechnikov profile kernel function
    end
end
C = 1/sum(sum(m_wei));                          % normalization factor

% compute the weighted color histogram of the target model qu
hist1 = zeros(1, 4096);
for i = 1:a
    for j = 1:b
        % quantize the RGB color space into 16*16*16 bins
        q_r = fix(double(temp(i, j, 1))/16);    % fix rounds toward zero; red
        q_g = fix(double(temp(i, j, 2))/16);    % green
        q_b = fix(double(temp(i, j, 3))/16);    % blue
        q_temp = q_r*256 + q_g*16 + q_b;        % combined bin index of the pixel
        hist1(q_temp+1) = hist1(q_temp+1) + m_wei(i, j); % accumulate the pixel's weight
    end
end
hist1 = hist1*C;
rect(3) = ceil(rect(3));
rect(4) = ceil(rect(4));

%% read the image sequence and track
for l = 1:nFrames
    Im = read(videoObj, l);                     % read the l-th frame
    num = 0;
    Y = [2, 2];
    % mean shift iteration
    while ((Y(1)^2 + Y(2)^2 > 0.5) && num < 20) % iteration conditions
        num = num + 1;
        temp1 = imcrop(Im, rect);
        % compute the candidate-region histogram pu
        hist2 = zeros(1, 4096);
        for i = 1:a
            for j = 1:b
                q_r = fix(double(temp1(i, j, 1))/16);
                q_g = fix(double(temp1(i, j, 2))/16);
                q_b = fix(double(temp1(i, j, 3))/16);
                q_temp1(i, j) = q_r*256 + q_g*16 + q_b;
                hist2(q_temp1(i, j)+1) = hist2(q_temp1(i, j)+1) + m_wei(i, j);
            end
        end
        hist2 = hist2*C;
        % Bhattacharyya weights w = sqrt(q/p)
        w = zeros(1, 4096);
        for i = 1:4096
            if (hist2(i) ~= 0)                  % avoid division by zero
                w(i) = sqrt(hist1(i)/hist2(i));
            else
                w(i) = 0;
            end
        end
        % weighted mean of the pixel offsets
        sum_w = 0;
        xw = [0, 0];
        for i = 1:a
            for j = 1:b
                sum_w = sum_w + w(uint32(q_temp1(i, j)) + 1);
                xw = xw + w(uint32(q_temp1(i, j)) + 1)*[i - y(1) - 0.5, j - y(2) - 0.5];
            end
        end
        Y = xw/sum_w;                           % mean shift vector
        % update the center point location
        rect(1) = rect(1) + Y(2);
        rect(2) = rect(2) + Y(1);
    end
    %%% tracking trajectory %%%
    tic_x = [tic_x; rect(1) + rect(3)/2];
    tic_y = [tic_y; rect(2) + rect(4)/2];
    v1 = rect(1); v2 = rect(2); v3 = rect(3); v4 = rect(4);
    %%% display the tracking result %%%
    imshow(uint8(Im));
    title('target tracking result and its motion trajectory');
    hold on;
    plot([v1, v1+v3], [v2, v2], [v1, v1], [v2, v2+v4], [v1, v1+v3], [v2+v4, v2+v4], ...
         [v1+v3, v1+v3], [v2, v2+v4], 'LineWidth', 2, 'Color', 'r');
    plot(tic_x, tic_y, 'LineWidth', 2, 'Color', 'b');
    hold off;
    pause(0.000001)
end
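The 16x16x16 RGB quantization inside the loop is the core of the color model. Here is a vectorized Python sketch of that step (the `quantized_histogram` helper is ours, not part of the MATLAB script): each channel is divided by 16 and combined into a single bin index `q_r*256 + q_g*16 + q_b`, and each pixel contributes its spatial kernel weight to that bin.

```python
import numpy as np

def quantized_histogram(patch, weights):
    """Weighted color histogram over 16*16*16 = 4096 RGB bins.

    patch:   (H, W, 3) uint8 image region
    weights: (H, W) spatial kernel weights (e.g. Epanechnikov profile)
    """
    q = (patch // 16).astype(np.int64)          # per-channel bin, 0..15
    bins = q[..., 0]*256 + q[..., 1]*16 + q[..., 2]  # combined bin, 0..4095
    hist = np.zeros(4096)
    np.add.at(hist, bins.ravel(), weights.ravel())   # accumulate pixel weights
    return hist / hist.sum()                    # normalize to a probability

patch = np.random.default_rng(2).integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
weights = np.ones((8, 8))                       # flat weights for the demo
hist = quantized_histogram(patch, weights)
```

Comparing this normalized histogram for the target patch and for a candidate patch via `sqrt(q/p)` yields exactly the per-pixel weights the mean shift update above averages over.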
Finally, an image borrowed from the official MATLAB website:
Human eye detection + mean shift tracking with the MATLAB toolbox: human eye tracking.