1 -- Back Projection
CamShift is a motion tracking algorithm whose full name is "Continuously Adaptive Mean-Shift". It tracks moving objects in video by their color information. I divide the algorithm into three parts for ease of understanding:
1) Back Projection Calculation
2) Mean Shift Algorithm
3) CamShift Algorithm
First, we will discuss Back Projection, and then we will continue to discuss the next two algorithms.
Back Projection
The steps for calculating Back Projection are as follows:
1. Calculate the color histogram of the tracked target. Among the various color spaces, only the H (hue) component of the HSI space (or a similar color space such as HSV) directly represents color information. Therefore, in the actual calculation, the image is first converted to such a space, and a 1D histogram is computed from the H component.
2. Based on the obtained color histogram, convert the original image into a color probability distribution image. This process is called "Back Projection".
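Conceptually, Back Projection replaces every pixel with the value of its hue bin in the histogram, so pixels whose color occurs frequently in the target become bright. As a rough illustration only (the function name, the assumption of an 8-bit hue image quantized to [0,255], and the hist_size parameter are mine; OpenCV provides cvCalcBackProject for the real calculation, see below), a minimal sketch:
// Conceptual sketch of Back Projection: each output pixel is the histogram
// value of the corresponding input pixel's hue bin. Requires "cv.h".
void naive_back_project(const IplImage* hue8, IplImage* dst,
                        const CvHistogram* hist, int hist_size)
{
    int x, y;
    for (y = 0; y < hue8->height; y++)
        for (x = 0; x < hue8->width; x++)
        {
            int h = CV_IMAGE_ELEM(hue8, uchar, y, x);   // quantized hue, 0..255
            int bin = h * hist_size / 256;              // which histogram bin it falls into
            float v = cvQueryHistValue_1D(hist, bin);   // histogram value of that bin
            CV_IMAGE_ELEM(dst, uchar, y, x) = (uchar)(v > 255.f ? 255.f : v);
        }
}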
The histogram function in OpenCV contains the Back Projection function. The function prototype is:
void cvCalcBackProject(IplImage** img, CvArr* backproject, const CvHistogram* hist);
There are three parameters passed to this function:
1. IplImage** img: the original image; input.
2. CvArr* backproject: the Back Projection result; output.
3. CvHistogram* hist: the histogram; input.
The following is the OpenCV code for calculating Back Projection.
1. Prepare an image that contains only the target to be tracked, convert it to the HSV color space, and extract the H component:
IplImage* target = cvLoadImage("target.bmp", -1);                           // load the image
IplImage* target_hsv = cvCreateImage(cvGetSize(target), IPL_DEPTH_8U, 3);
IplImage* target_hue = cvCreateImage(cvGetSize(target), IPL_DEPTH_8U, 1);   // single channel for H
cvCvtColor(target, target_hsv, CV_BGR2HSV);                                 // convert to HSV space
cvSplit(target_hsv, target_hue, NULL, NULL, NULL);                          // obtain the H component
2. Calculate the histogram of the H component, i.e. a 1D histogram:
int hist_size[] = {255};                  // quantize the value of H to [0,255]
float hranges[] = {0, 360};               // the value range of the H component is [0,360)
float* ranges[] = {hranges};
CvHistogram* hist = cvCreateHist(1, hist_size, CV_HIST_ARRAY, ranges, 1);
cvCalcHist(&target_hue, hist, 0, NULL);
Here we need to consider the value range of the H component. The H component lies in [0,360), which cannot be represented by a single byte. To store it in a byte, the H value has to be quantized appropriately; here we quantize the range of H to [0,255].
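As a reference for this quantization step, a one-line sketch (the function name is illustrative). Note that when cvCvtColor converts an 8-bit image to HSV, OpenCV itself already stores H in the range [0,180), which is why the demo code at the end of this article uses hranges_arr[] = {0,180}.
// Illustrative quantization: map a hue h in degrees, [0,360), to a byte in [0,255].
unsigned char quantize_hue(float h)
{
    return (unsigned char)(h * 255.0f / 360.0f);
}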
3. Calculate the Back Projection:
IplImage* rawImage;
//----------------------------------------------
// rawImage: the H (hue) plane of the current video frame,
// unsigned byte, one channel (obtained the same way as target_hue above)
//----------------------------------------------
IplImage* result = cvCreateImage(cvGetSize(rawImage), IPL_DEPTH_8U, 1);
cvCalcBackProject(&rawImage, result, hist);
4. The image result is the Back Projection we need.
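If you want to inspect the result visually, a brief sketch continuing from the code above (the window name is arbitrary):
// Quick visual check of the Back Projection: brighter pixels indicate colors
// that occur frequently in the tracked target.
cvNamedWindow("BackProjection", 1);
cvShowImage("BackProjection", result);
cvWaitKey(0);
cvDestroyWindow("BackProjection");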
2 -- Mean Shift Algorithm
This is the second part of the CamShift algorithm and its OpenCV implementation. This time, we focus on the Mean Shift algorithm.
Before discussing the Mean Shift algorithm, we first discuss how to calculate the center of gravity (Mass Center) of a region in a 2D probability distribution image. It can be computed from the image moments as follows, where I(i, j) is the pixel value at position (i, j):
1. Zeroth-order moment of the region:
For (int i = 0; i < height; i++)
    For (int j = 0; j < width; j++)
        M00 += I(i, j);
2. First-order moments of the region:
For (int i = 0; i < height; i++)
    For (int j = 0; j < width; j++)
    {
        M10 += i * I(i, j);
        M01 += j * I(i, j);
    }
3. The Mass Center, expressed in the (i, j) coordinates, is:
Xc = M10/M00; Yc = M01/M00
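As a rough illustration, here is a minimal C sketch (the function name and the use of an 8-bit, single-channel IplImage are my assumptions, not part of OpenCV) that computes these moments over a rectangular window of a probability image:
// Minimal sketch: mass center of a rectangular region of an 8-bit,
// single-channel probability image, using the same (i, j) convention as above.
// Requires "cv.h".
CvPoint2D32f region_mass_center(const IplImage* prob, CvRect win)
{
    double m00 = 0, m10 = 0, m01 = 0;
    int i, j;
    for (i = 0; i < win.height; i++)            // i: row offset inside the window
        for (j = 0; j < win.width; j++)         // j: column offset inside the window
        {
            double v = CV_IMAGE_ELEM(prob, uchar, win.y + i, win.x + j);
            m00 += v;                           // zeroth-order moment
            m10 += i * v;                       // first-order moment in i
            m01 += j * v;                       // first-order moment in j
        }
    // centroid relative to the window's top-left corner: Xc = M10/M00, Yc = M01/M00
    return cvPoint2D32f(m00 ? m10 / m00 : 0, m00 ? m01 / m00 : 0);
}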
Next, we will discuss the specific steps of the Mean Shift algorithm. The Mean Shift algorithm can be divided into the following four steps:
1. Select the window size and initial position.
2. Calculate the Mass Center in the window.
3. Adjust the Center of the window to the Mass Center.
4. Repeat 2 and 3 until the window "converges", that is, until the distance the window moves in one iteration is smaller than a given threshold.
In OpenCV, the Mean Shift algorithm function is provided. The prototype of the function is:
int cvMeanShift(IplImage* imgprob, CvRect win,
                CvTermCriteria criteria, CvConnectedComp* out);
The required parameters are:
1. IplImage* imgprob: the 2D probability distribution image; input.
2. CvRect win: the initial window; input.
3. CvTermCriteria criteria: the criteria for stopping the iteration; input.
4. CvConnectedComp* out: the search result; output.
(Note: Constructing a CvTermCriteria variable requires three parameters: the type, the maximum number of iterations, and the accuracy threshold. For example: criteria = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10, 0.1).)
Returned parameters:
1. int: number of iterations.
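For completeness, a minimal usage sketch of cvMeanShift (the wrapper function name is illustrative; the back-projection image would be produced as in section 1):
// Run one Mean Shift search on a back-projection image and return the
// converged window, which can seed the next call. Requires "cv.h".
CvRect mean_shift_once(IplImage* backproject, CvRect win)
{
    CvConnectedComp comp;
    int iters = cvMeanShift(backproject, win,
                            cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10, 1),
                            &comp);   // returns the number of iterations performed
    (void)iters;                      // not used further in this sketch
    return comp.rect;
}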
3 -- CamShift Algorithm
1. Principle
After learning about the Mean Shift algorithm, we extend it to a continuous image sequence (usually a video sequence); this gives the CamShift algorithm, whose full name is "Continuously Adaptive Mean-Shift". Its basic idea is to perform the Mean Shift operation on every frame of the video, using the result of the previous frame (that is, the center and size of the Search Window) as the initial Search Window for the Mean Shift run on the next frame. Iterating in this way realizes target tracking. The algorithm consists of five steps:
Step 1: set the entire image as the search area.
Step 2: Initialize the size and position of the Search Window.
Step 3: Calculate the color probability distribution of a region centered on the Search Window and slightly larger than it.
Step 4: Run MeanShift. Obtain the new position and size of the Search Window.
Step 5: In the next video frame, initialize the position and size of the Search Window with the values obtained in Step 4, then jump to Step 3 and continue.
2. Implementation
In OpenCV, there is a function that implements the CamShift algorithm. The prototype of this function is:
int cvCamShift(IplImage* imgprob, CvRect win,
               CvTermCriteria criteria,
               CvConnectedComp* out, CvBox2D* box = 0);
Where:
imgprob: the color probability distribution image; input.
win: the initial Search Window; input.
criteria: the criterion used to determine when to stop the search; input.
out: the calculation result, including the position and area of the new Search Window; output.
box: the smallest rectangle enclosing the tracked object; output (optional).
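A minimal per-frame sketch of how these parameters are typically used (the wrapper function name is illustrative; the full demo below follows the same pattern):
// One CamShift step: run the search on the current frame's back projection
// and return the window that should initialize the next frame's search.
// Requires "cv.h".
CvRect camshift_step(IplImage* backproject, CvRect track_window)
{
    CvConnectedComp comp;
    CvBox2D box;
    cvCamShift(backproject, track_window,
               cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1),
               &comp, &box);
    // box holds the rotated rectangle of the tracked object;
    // comp.rect becomes the initial Search Window for the next frame
    return comp.rect;
}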
Note:
1. In the samples directory of OpenCV beta 4.0 there is a CamShift example. Unfortunately, target tracking in that example is semi-automatic, that is, you have to select the target with the mouse. I am trying to implement fully automatic target tracking, and I hope to exchange ideas with readers on this topic.
Source code for tracking and detection of moving objects (CAMSHIFT algorithm)
From http://blog.csdn.net/hunnish/archive/2004/09/07/97049.aspx
C/C++ source code for fast tracking and detection of moving targets using the CAMSHIFT algorithm. This example is provided among the SAMPLES of OpenCV beta 4.0. The algorithm is described as follows:
This application demonstrates a fast, simple color tracking algorithm that can be used to track faces and hands. The CAMSHIFT algorithm is a modification of the Mean Shift algorithm, which is a robust statistical method of finding the mode (peak) of a probability distribution. Both the CAMSHIFT and Mean Shift algorithms exist in the library. While it is a very fast and simple method of tracking, because CAMSHIFT tracks the center and size of the probability distribution of an object, it is only as good as the probability distribution that you produce for the object. Typically the probability distribution is derived from color via a histogram, although it could be produced from correlation, recognition scores, or bolstered by frame differencing or motion detection schemes, or joint probabilities of different colors/motions, etc.
In this application, we use only the most simplistic approach: a 1D hue histogram is sampled from the object in an HSV color-space version of the image. To produce the probability image to track, histogram "back projection" (we replace image pixels by their histogram hue value) is used.
For details about the algorithm, see the following:
http://www.assuredigit.com/incoming/camshift.pdf
For usage of OPENCV B4.0 library and related questions, refer to the following articles:
http://forum.assuredigit.com/display_topic_threads.asp?ForumID=11&topicid=3471
Download running files:
http://www.assuredigit.com/product_tech/Demo_Download_files/camshiftdemo.exe
The executable is compiled with VC6.0. It is a stand-alone program and does not require the OpenCV DLLs. Before running, connect a USB camera; you can then select the target to be tracked with the mouse.
=====
#ifdef _CH_
#pragma package <opencv>
#endif

#ifndef _EiC
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <ctype.h>
#endif
IplImage *image = 0, *hsv = 0, *hue = 0, *mask = 0, *backproject = 0, *histimg = 0;
CvHistogram *hist = 0;

int backproject_mode = 0;
int select_object = 0;
int track_object = 0;
int show_hist = 1;
CvPoint origin;
CvRect selection;
CvRect track_window;
CvBox2D track_box;              // the box returned by tracking, with an angle
CvConnectedComp track_comp;
int hdims = 48;                 // number of histogram bins; more bins give finer hue resolution
float hranges_arr[] = {0, 180};
float* hranges = hranges_arr;
int vmin = 10, vmax = 256, smin = 30;
void on_mouse( int event, int x, int y, int flags )
{
    if( !image )
        return;

    if( image->origin )
        y = image->height - y;

    if( select_object )
    {
        selection.x = MIN(x, origin.x);
        selection.y = MIN(y, origin.y);
        selection.width = selection.x + CV_IABS(x - origin.x);
        selection.height = selection.y + CV_IABS(y - origin.y);

        selection.x = MAX( selection.x, 0 );
        selection.y = MAX( selection.y, 0 );
        selection.width = MIN( selection.width, image->width );
        selection.height = MIN( selection.height, image->height );
        selection.width -= selection.x;
        selection.height -= selection.y;
    }

    switch( event )
    {
    case CV_EVENT_LBUTTONDOWN:
        origin = cvPoint(x, y);
        selection = cvRect(x, y, 0, 0);
        select_object = 1;
        break;
    case CV_EVENT_LBUTTONUP:
        select_object = 0;
        if( selection.width > 0 && selection.height > 0 )
            track_object = -1;
#ifdef _DEBUG
        printf("\n # Selected area of the mouse:");
        printf("\n X = %d, Y = %d, Width = %d, Height = %d",
               selection.x, selection.y, selection.width, selection.height);
#endif
        break;
    }
}
CvScalar hsv2rgb( float hue )
{
    int rgb[3], p, sector;
    static const int sector_data[][3] =
        {{0,2,1}, {1,2,0}, {1,0,2}, {2,0,1}, {2,1,0}, {0,1,2}};
    hue *= 0.033333333333333333333333333333333f;
    sector = cvFloor(hue);
    p = cvRound(255*(hue - sector));
    p ^= sector & 1 ? 255 : 0;

    rgb[sector_data[sector][0]] = 255;
    rgb[sector_data[sector][1]] = 0;
    rgb[sector_data[sector][2]] = p;

#ifdef _DEBUG
    printf("\n # Convert HSV to RGB:");
    printf("\n HUE = %f", hue);
    printf("\n R = %d, G = %d, B = %d", rgb[0], rgb[1], rgb[2]);
#endif

    return cvScalar(rgb[2], rgb[1], rgb[0], 0);
}
int main( int argc, char** argv )
{
    CvCapture* capture = 0;
    IplImage* frame = 0;

    if( argc == 1 || (argc == 2 && strlen(argv[1]) == 1 && isdigit(argv[1][0])))
        capture = cvCaptureFromCAM( argc == 2 ? argv[1][0] - '0' : 0 );
    else if( argc == 2 )
        capture = cvCaptureFromAVI( argv[1] );

    if( !capture )
    {
        fprintf(stderr, "Could not initialize capturing...\n");
        return -1;
    }

    printf( "Hot keys:\n"
        "\tESC - quit the program\n"
        "\tc - stop the tracking\n"
        "\tb - switch to/from backprojection view\n"
        "\th - show/hide object histogram\n"
        "To initialize tracking, select the object with mouse\n" );

    //cvNamedWindow( "Histogram", 1 );
    cvNamedWindow( "CamShiftDemo", 1 );
    cvSetMouseCallback( "CamShiftDemo", on_mouse );   // on_mouse is the custom mouse-event handler
    cvCreateTrackbar( "Vmin", "CamShiftDemo", &vmin, 256, 0 );
    cvCreateTrackbar( "Vmax", "CamShiftDemo", &vmax, 256, 0 );
    cvCreateTrackbar( "Smin", "CamShiftDemo", &smin, 256, 0 );

    for(;;)
    {
        int i, bin_w, c;

        frame = cvQueryFrame( capture );
        if( !frame )
            break;

        if( !image )
        {
            // allocate all the buffers on the first frame
            image = cvCreateImage( cvGetSize(frame), 8, 3 );
            image->origin = frame->origin;
            hsv = cvCreateImage( cvGetSize(frame), 8, 3 );
            hue = cvCreateImage( cvGetSize(frame), 8, 1 );
            mask = cvCreateImage( cvGetSize(frame), 8, 1 );
            backproject = cvCreateImage( cvGetSize(frame), 8, 1 );
            hist = cvCreateHist( 1, &hdims, CV_HIST_ARRAY, &hranges, 1 );   // create the hue histogram
            histimg = cvCreateImage( cvSize(320, 200), 8, 3 );
            cvZero( histimg );
        }

        cvCopy( frame, image, 0 );
        cvCvtColor( image, hsv, CV_BGR2HSV );   // color space conversion: BGR to HSV

        if( track_object )
        {
            int _vmin = vmin, _vmax = vmax;

            cvInRangeS( hsv, cvScalar(0, smin, MIN(_vmin, _vmax), 0),
                        cvScalar(180, 256, MAX(_vmin, _vmax), 0), mask );   // obtain the binary mask
            cvSplit( hsv, hue, 0, 0, 0 );   // extract only the HUE component

            if( track_object < 0 )
            {
                float max_val = 0.f;
                cvSetImageROI( hue, selection );    // set ROI to the selected region
                cvSetImageROI( mask, selection );
                cvCalcHist( &hue, hist, 0, mask );  // calculate the histogram of the selection
                cvGetMinMaxHistValue( hist, 0, &max_val, 0, 0 );   // only the maximum value is needed
                cvConvertScale( hist->bins, hist->bins, max_val ? 255. / max_val : 0., 0 );   // scale the bins to [0,255]
                cvResetImageROI( hue );     // remove ROI
                cvResetImageROI( mask );
                track_window = selection;
                track_object = 1;

                cvZero( histimg );
                bin_w = histimg->width / hdims;   // hdims bins, so bin_w is the width of each bar
                // draw the histogram
                for( i = 0; i < hdims; i++ )
                {
                    int val = cvRound( cvGetReal1D(hist->bins, i) * histimg->height / 255 );
                    CvScalar color = hsv2rgb(i * 180.f / hdims);
                    cvRectangle( histimg, cvPoint(i*bin_w, histimg->height),
                                 cvPoint((i+1)*bin_w, histimg->height - val),
                                 color, -1, 8, 0 );
                }
            }

            cvCalcBackProject( &hue, backproject, hist );   // compute the back projection
            cvAnd( backproject, mask, backproject, 0 );
            // call the CAMSHIFT algorithm module
            cvCamShift( backproject, track_window,
                        cvTermCriteria( CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1 ),
                        &track_comp, &track_box );
            track_window = track_comp.rect;

            if( backproject_mode )
                cvCvtColor( backproject, image, CV_GRAY2BGR );   // display the backproject grayscale image
            if( image->origin )
                track_box.angle = -track_box.angle;
            cvEllipseBox( image, track_box, CV_RGB(255, 0, 0), 3, CV_AA, 0 );
        }

        if( select_object && selection.width > 0 && selection.height > 0 )
        {
            cvSetImageROI( image, selection );
            cvXorS( image, cvScalarAll(255), image, 0 );
            cvResetImageROI( image );
        }

        cvShowImage( "CamShiftDemo", image );
        cvShowImage( "Histogram", histimg );

        c = cvWaitKey(10);
        if( c == 27 )
            break;   // exit from the for-loop on ESC
        switch( c )
        {
        case 'b':
            backproject_mode ^= 1;
            break;
        case 'c':
            track_object = 0;
            cvZero( histimg );
            break;
        case 'h':
            show_hist ^= 1;
            if( !show_hist )
                cvDestroyWindow( "Histogram" );
            else
                cvNamedWindow( "Histogram", 1 );
            break;
        default:
            ;
        }
    }

    cvReleaseCapture( &capture );
    cvDestroyWindow( "CamShiftDemo" );

    return 0;
}

#ifdef _EiC
main(1, "camshiftdemo.c");
#endif