Target Tracking: CamShift
Reprint note: please credit the source: http://blog.csdn.net/zhonghuan1992
CamShift stands for Continuously Adaptive Mean Shift, i.e. a continuously self-adapting MeanShift algorithm. To understand CamShift you first need a basic grasp of MeanShift; for more information, see the link below. CamShift builds on MeanShift: it adjusts the center position and window size for the next frame based on the result from the previous frame, so when the tracked target changes position or size in the video, the search window adapts to that change.
The camShift example in OpenCV can be divided into three parts (reference: http://blog.csdn.net/carson2005/article/details/7439125):
I. Compute the color probability distribution (back projection):
(1) To reduce the effect of illumination changes on target tracking, first convert the image from the RGB color space to the HSV color space;
(2) Build a histogram of the H (hue) component. The histogram gives the probability (or pixel count) of each hue value x; this is the color probability lookup table;
(3) Replace each pixel's value with the probability that its color appears, producing the color probability distribution image.
These three steps are called back projection. Note that the color probability distribution image is a grayscale image.
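The three steps above can be sketched in a few lines of pure Python on a toy hue channel. This is only an illustration of the idea, not OpenCV's implementation: the pixel values and the 16-bin layout are made-up assumptions.

```python
# Sketch of histogram back projection on a toy "hue channel".
# Each pixel is replaced by the probability of its hue bin,
# estimated from a selected region of interest (ROI).

def hue_histogram(pixels, bins=16, hue_range=180):
    """Count hue values from the ROI into `bins` equal-width bins, normalized to probabilities."""
    hist = [0] * bins
    for h in pixels:
        hist[h * bins // hue_range] += 1
    total = sum(hist)
    return [c / total for c in hist]

def back_project(image, hist, bins=16, hue_range=180):
    """Replace every pixel by the probability of its hue bin."""
    return [[hist[h * bins // hue_range] for h in row] for row in image]

# Toy 3x4 hue image; the ROI is the two middle pixels of the top row.
image = [[20, 90, 95, 20],
         [90, 95, 90, 20],
         [20, 20, 90, 95]]
roi = [90, 95]

hist = hue_histogram(roi)   # bin 8 (hues 90..101) gets all the mass
prob = back_project(image, hist)
```

Pixels whose hue matches the ROI come out with probability 1.0, everything else 0.0: the bright regions of the resulting grayscale image are where the target's colors occur.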
II. MeanShift optimization
As mentioned above, the meanShift algorithm (http://blog.csdn.net/carson2005/article/details/7337432) is a non-parametric probability density estimation method that finds the location of the optimal search window through repeated iteration.
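The core of that iteration can be sketched in pure Python: the window is repeatedly re-centered on the centroid (center of mass) of the probability values it covers until it stops moving. This is a minimal sketch on a toy probability map, not OpenCV's implementation.

```python
# Minimal mean-shift sketch on a 2D probability map
# (a stand-in for the back-projection image).

def centroid(prob, x, y, w, h):
    """Center of mass and total mass of the window [x, x+w) x [y, y+h)."""
    m00 = m10 = m01 = 0.0
    for j in range(y, y + h):
        for i in range(x, x + w):
            p = prob[j][i]
            m00 += p
            m10 += p * i
            m01 += p * j
    if m00 == 0:
        return x + w // 2, y + h // 2, 0.0  # empty window: stay put
    return m10 / m00, m01 / m00, m00

def mean_shift(prob, x, y, w, h, max_iter=10):
    """Shift the window until its center coincides with the centroid."""
    for _ in range(max_iter):
        cx, cy, _ = centroid(prob, x, y, w, h)
        # re-center the window on the centroid, clamped to the map
        nx = min(max(int(round(cx)) - w // 2, 0), len(prob[0]) - w)
        ny = min(max(int(round(cy)) - h // 2, 0), len(prob) - h)
        if (nx, ny) == (x, y):
            break
        x, y = nx, ny
    return x, y

# A 6x6 map whose mass is concentrated around (4, 4).
prob = [[0.0] * 6 for _ in range(6)]
prob[4][4] = 1.0
prob[4][3] = prob[3][4] = 0.5

x, y = mean_shift(prob, 2, 2, 3, 3)  # the 3x3 window climbs toward the mass
```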
III. The camShift tracking algorithm
As mentioned above, camShift applies meanShift to every frame of the video sequence, using the meanShift result of the previous frame as the initial window of the next frame. Iterating this way across frames achieves target tracking.
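The window-size adaptation that distinguishes camShift from plain meanShift can be sketched as follows. The 2*sqrt(M00/256) form follows Bradski's original formulation for 8-bit back-projection images; the 1.2 height factor (windows slightly taller than wide, convenient for faces) is an illustrative assumption, not a fixed rule.

```python
import math

# Sketch of CamShift's window adaptation: after mean shift converges,
# the window size for the next frame is derived from the zeroth moment
# (total probability mass) inside the current window.

def adapt_window(prob, x, y, w, h):
    """Return a new (w, h) scaled from the mass under the current window (8-bit values)."""
    m00 = sum(prob[j][i] for j in range(y, y + h) for i in range(x, x + w))
    s = 2 * math.sqrt(m00 / 256.0)
    # height chosen slightly taller than width (illustrative 1.2 factor)
    return max(int(round(s)), 1), max(int(round(1.2 * s)), 1)

# A fully saturated 4x4 window (all 255) grows to roughly 8x10.
prob = [[255] * 4 for _ in range(4)]
w, h = adapt_window(prob, 0, 0, 4, 4)
```

When the target approaches the camera the mass under the window grows and the window expands; when it recedes, the window shrinks.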
OpenCV ships with a camShift demo. Let's look at the implementation; explanations are given in the code comments. (Comments from http://www.cnblogs.com/tornadomeet/archive/2012/03/15/2398769.html)
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>

using namespace cv;
using namespace std;

Mat image;
bool backprojMode = false; // whether to enter back-projection mode; true means it will be entered
bool selectObject = false; // whether the initial target is being selected; true means selection with the mouse is in progress
int trackObject = 0;       // number of tracked targets
bool showHist = true;      // whether to display the histogram
Point origin;              // saves the position of the first mouse click
Rect selection;            // saves the rectangle selected with the mouse
int vmin = 10, vmax = 256, smin = 30;

static void onMouse(int event, int x, int y, int, void*)
{
    if (selectObject) // only valid while the left button is held down; the selected rectangle is computed here
    {
        selection.x = MIN(x, origin.x);             // top-left corner of the rectangle
        selection.y = MIN(y, origin.y);
        selection.width = std::abs(x - origin.x);   // rectangle width
        selection.height = std::abs(y - origin.y);  // rectangle height
        selection &= Rect(0, 0, image.cols, image.rows); // keep the rectangle inside the image
    }
    switch (event)
    {
    case CV_EVENT_LBUTTONDOWN:
        origin = Point(x, y);
        selection = Rect(x, y, 0, 0); // start a new rectangle when the button is pressed
        selectObject = true;
        break;
    case CV_EVENT_LBUTTONUP:
        selectObject = false;
        if (selection.width > 0 && selection.height > 0)
            trackObject = -1;
        break;
    }
}

static void help()
{
    cout << "\nThis is a demo that shows mean-shift based tracking\n"
            "You select a colored object such as your face and it tracks it.\n"
            "This reads from video camera (0 by default, or the camera number the user enters)\n"
            "Usage: \n"
            "   ./camshiftdemo [camera number]\n";
    cout << "\n\nHot keys: \n"
            "\tESC - quit the program\n"
            "\tc - stop the tracking\n"
            "\tb - switch to/from backprojection view\n"
            "\th - show/hide object histogram\n"
            "\tp - pause video\n"
            "To initialize tracking, select the object with mouse\n";
}

const char* keys = { "{1 | 0 | camera number}" };

int main(int argc, const char** argv)
{
    help();
    VideoCapture cap;           // camera capture object
    Rect trackWindow;
    RotatedRect trackBox;       // rotated-rectangle object
    int hsize = 16;
    float hranges[] = {0, 180}; // used by the histogram functions below
    const float* phranges = hranges;
    CommandLineParser parser(argc, argv, keys); // command-line parser
    int camNum = parser.get<int>("1");
    cap.open(camNum);           // open the camera directly via the member function
    if (!cap.isOpened())
    {
        help();
        cout << "***Could not initialize capturing...***\n";
        cout << "Current parameter's value: \n";
        parser.printParams();
        return -1;
    }
    namedWindow("Histogram", 0);
    namedWindow("CamShift Demo", 0);
    setMouseCallback("CamShift Demo", onMouse, 0);          // mouse message handling
    createTrackbar("Vmin", "CamShift Demo", &vmin, 256, 0); // createTrackbar creates a slider named Vmin in the given window; vmin holds its value, 256 is the maximum
    createTrackbar("Vmax", "CamShift Demo", &vmax, 256, 0); // the last parameter 0 means no callback is attached
    createTrackbar("Smin", "CamShift Demo", &smin, 256, 0); // the initial values of vmin, vmax, smin are 10, 256, 30

    Mat frame, hsv, hue, mask, hist, histimg = Mat::zeros(200, 320, CV_8UC3), backproj;
    bool paused = false;

    for (;;)
    {
        if (!paused) // not paused
        {
            cap >> frame; // grab a frame from the camera into frame
            if (frame.empty())
                break;
        }
        frame.copyTo(image);
        if (!paused) // pause key not pressed
        {
            cvtColor(image, hsv, CV_BGR2HSV); // convert the BGR camera frame to HSV space
            if (trackObject) // trackObject is initialized to 0 (and reset to 0 by the 'c' key); after a mouse selection it is -1
            {
                int _vmin = vmin, _vmax = vmax;
                // inRange checks whether each element of the input array lies between two given
                // values, possibly with several channels. Here the three HSV channels are compared:
                // h in 0~180, s in smin~256, v in min(vmin,vmax)~max(vmin,vmax). If all three
                // channels are in range, the corresponding mask pixel is 255 (0xff), otherwise 0 (0x00).
                inRange(hsv, Scalar(0, smin, MIN(_vmin, _vmax)),
                        Scalar(180, 256, MAX(_vmin, _vmax)), mask);
                int ch[] = {0, 0};
                hue.create(hsv.size(), hsv.depth()); // hue has the same size and depth as hsv; hue is measured as an angle, with red, green, and blue 120 degrees apart and complementary colors 180 degrees apart
                mixChannels(&hsv, 1, &hue, 1, ch, 1); // copy channel 0 of hsv (the hue) into hue; ch is the index array

                if (trackObject < 0) // set to -1 when the mouse selection is released; the branch below then sets it to 1
                {
                    // roi reuses the Mat header of hue and its data pointer points into hue,
                    // i.e. they share the same data; selection is the region of interest
                    Mat roi(hue, selection), maskroi(mask, selection); // mask holds the in-range HSV pixels
                    // calcHist parameters: 1) the list of input matrices, 2) the number of inputs,
                    // 3) the channels over which the histogram is computed, 4) an optional mask,
                    // 5) the output histogram, 6) the histogram dimensionality,
                    // 7) the size of each histogram dimension, 8) the bin boundaries
                    calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges); // histogram of channel 0 of roi through the mask; hsize bins
                    normalize(hist, hist, 0, 255, CV_MINMAX); // normalize the hist values into the range 0~255

                    trackWindow = selection;
                    trackObject = 1; // stays 1 once the selection is released (unless 'c' is pressed), so this branch runs only once per selection

                    histimg = Scalar::all(0); // clear the histogram image, as the 'c' key does; all(0) means every scalar component is 0
                    int binW = histimg.cols / hsize; // histimg is a 200x320 matrix; binW is the width of each bin
                    Mat buf(1, hsize, CV_8UC3);      // a single-row buffer, one entry per bin
                    for (int i = 0; i < hsize; i++)
                        buf.at<Vec3b>(i) = Vec3b(saturate_cast<uchar>(i * 180. / hsize), 255, 255);
                    cvtColor(buf, buf, CV_HSV2BGR);
                    for (int i = 0; i < hsize; i++)
                    {
                        int val = saturate_cast<int>(hist.at<float>(i) * histimg.rows / 255);
                        rectangle(histimg, Point(i * binW, histimg.rows),
                                  Point((i + 1) * binW, histimg.rows - val),
                                  Scalar(buf.at<Vec3b>(i)), -1, 8); // draw one bin of the histogram
                    }
                }

                calcBackProject(&hue, 1, 0, hist, backproj, &phranges); // back projection of the hue channel
                backproj &= mask;
                trackBox = CamShift(backproj, trackWindow,
                    TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1)); // run CamShift; trackWindow is updated in place
                if (trackWindow.area() <= 1)
                {
                    int cols = backproj.cols, rows = backproj.rows, r = (MIN(cols, rows) + 5) / 6;
                    trackWindow = Rect(trackWindow.x - r, trackWindow.y - r,
                                       trackWindow.x + r, trackWindow.y + r) &
                                  Rect(0, 0, cols, rows);
                }
                if (backprojMode)
                    cvtColor(backproj, image, CV_GRAY2BGR);
                ellipse(image, trackBox, Scalar(0, 0, 255), 3, CV_AA); // draw the tracked region
            }
        }
        else if (trackObject < 0)
            paused = false;

        if (selectObject && selection.width > 0 && selection.height > 0)
        {
            Mat roi(image, selection);
            bitwise_not(roi, roi); // invert the selection while it is being dragged
        }

        imshow("CamShift Demo", image);
        imshow("Histogram", histimg);

        char c = (char)waitKey(10);
        if (c == 27) // ESC quits
            break;
        switch (c)
        {
        case 'b':
            backprojMode = !backprojMode;
            break;
        case 'c':
            trackObject = 0;
            histimg = Scalar::all(0);
            break;
        case 'h':
            showHist = !showHist;
            if (!showHist)
                destroyWindow("Histogram");
            else
                namedWindow("Histogram", 1);
            break;
        case 'p':
            paused = !paused;
            break;
        default:
            ;
        }
    }
    return 0;
}
Experiment results:
The target is my face in the camera. When it moves, the tracker follows it.
The histogram window shows the density of the various hues in my face.
Automatic tracking of moving targets with camShift / meanShift
camShift: tips for automated tracking. OpenCV provides a color-based tracking algorithm, camShift, which works well. However, it is semi-automatic: you must select the tracking target on the tracking window by hand. How can we set the target in advance so that tracking starts on its own?
I took a shortcut: I added an image that defines the tracking target. The image is loaded at startup and used to generate the histogram required for tracking, which makes the tracking automatic. The procedure is as follows:
1. Find the object you want to track and take a snapshot with the camera (print screen). Note that because of lighting changes the object appears brighter up close and darker farther away, so the snapshot's colors should match those of the actual tracking conditions as closely as possible.
2. Open a paint program and create an image (320*240) the same size as the video source, then paste and crop the captured object so that its color region fills the image.
3. Add the following function to the code:
void loadTemplateImage()
{
    IplImage* tempimage = cvLoadImage("F:/OM_tracking/Test cam shift/ShadowTrack/Debug/green.bmp", 1);
    cvCvtColor(tempimage, hsv, CV_BGR2HSV);
    int _vmin = vmin, _vmax = vmax;
    cvInRangeS(hsv, cvScalar(0, smin, MIN(_vmin, _vmax), 0),
               cvScalar(180, 256, MAX(_vmin, _vmax), 0), mask);
    cvSplit(hsv, hue, 0, 0, 0);
    selection.x = 1;
    selection.y = 1;
    selection.width = 320 - 1;
    selection.height = 240 - 1;
    cvSetImageROI(hue, selection);
    cvSetImageROI(mask, selection);
    cvCalcHist(&hue, hist, 0, mask);
    float max_val = 0.f;
    cvGetMinMaxHistValue(hist, 0, &max_val, 0, 0);
    cvConvertScale(hist->bins, hist->bins, max_val ? 255. / max_val : 0., 0);
    cvResetImageROI(hue);
    cvResetImageROI(mask);
    track_window = selection;
    track_object = 1;
    cvReleaseImage(&tempimage);
}
4. Remove the original hist generation code and call loadTemplateImage at startup.
5. Run the code and check the result. Download the code here.
Target Tracking Algorithms
Target tracking: use region matching between two adjacent frames to build a target chain from the image sequence, tracking each target through the whole process from entering the monitored range to leaving it. The first step is to choose a matching criterion. Common image matching methods include the Hausdorff distance and image cross-correlation.
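For finite point sets (for example, the edge points of the regions being matched), the Hausdorff distance mentioned above can be sketched in a few lines. This is a minimal illustration, not the partial/ranked variants often used in practice.

```python
import math

# Directed and symmetric Hausdorff distance between two finite point sets,
# a common matching criterion for comparing object regions across frames.

def directed_hausdorff(a, b):
    """For each point of a, find the closest point of b; take the worst case."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Two toy point sets: every point of a is near b, but (0, 3) of b is far from a.
a = [(0, 0), (1, 0)]
b = [(0, 0), (0, 3)]

d = hausdorff(a, b)
```

A small distance means every point of each region lies close to some point of the other, so the two regions match.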