Understanding object motion consists of two parts: identification and modeling.
Identification means finding the objects of interest from a previous frame in subsequent frames of the video stream.
Search for corners
The feature points used for tracking are called corners. Intuitively, a corner (rather than an edge) is a point that contains enough information to be picked out reliably in both the current frame and the next frame.
Harris corners are located where the autocorrelation matrix of the image's second derivatives has two large eigenvalues. This essentially means there is texture in at least two different directions around the point, just as a real corner arises where at least two edges intersect.
cvGoodFeaturesToTrack implements the method proposed by Shi and Tomasi: it first computes the second derivatives, then the eigenvalues, and returns an array of corner points that satisfy the definition of being easy to track.
void cvGoodFeaturesToTrack(
    const CvArr* image,      // 8-bit or 32-bit single-channel image
    CvArr* eig_image,        // each element holds the minimal eigenvalue of the corresponding input pixel
    CvArr* temp_image,       // scratch buffer
    CvPoint2D32f* corners,   // output array of detected corners
    int* corner_count,       // in: maximum number of corners to return; out: number actually found
    double quality_level,    // minimal accepted eigenvalue, as a fraction of the strongest corner's; must not exceed 1
    double min_distance,     // discard corners that lie closer than this to a stronger corner
    const CvArr* mask = NULL,
    int block_size = 3,
    int use_harris = 0,
    double k = 0.04          // Harris free parameter, used only when use_harris != 0
);
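As an illustration of the eigenvalue test behind this function, here is a minimal pure-Python sketch (not OpenCV code; the tiny synthetic image and window size are invented for the example). It builds the 2x2 gradient autocorrelation matrix over a small window and compares the Shi-Tomasi minimum-eigenvalue score with the Harris score:

```python
import math

def structure_matrix(img, x, y, win=1):
    """Sum of outer products of image gradients over a (2*win+1)^2 window."""
    sxx = sxy = syy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0  # central difference in x
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0  # central difference in y
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    return sxx, sxy, syy

def min_eigenvalue(sxx, sxy, syy):
    """Smaller eigenvalue of the symmetric 2x2 matrix [[sxx, sxy], [sxy, syy]]."""
    t = 0.5 * (sxx + syy)
    d = math.sqrt(max(0.0, t * t - (sxx * syy - sxy * sxy)))
    return t - d

def harris_response(sxx, sxy, syy, k=0.04):
    """Harris score: det(M) - k * trace(M)^2."""
    return (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2

# Tiny synthetic image: a bright square whose top-left corner sits at (4, 4).
img = [[1.0 if (i >= 4 and j >= 4) else 0.0 for i in range(9)] for j in range(9)]

corner = min_eigenvalue(*structure_matrix(img, 4, 4))  # at the square's corner
edge = min_eigenvalue(*structure_matrix(img, 6, 4))    # on a straight edge
flat = min_eigenvalue(*structure_matrix(img, 2, 2))    # in a flat region
```

Only the true corner has texture in two directions, so only there is the minimum eigenvalue large; on the edge and in the flat region it is zero, which is exactly the criterion cvGoodFeaturesToTrack thresholds with quality_level.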
Sub-pixel corner points
cvFindCornerSubPix is used to locate corner positions to sub-pixel precision.
To compute a sub-pixel corner, a system of equations is solved in which each equation sets a dot product to zero; one equation is generated for each point p in the neighborhood of the candidate corner q (the image gradient at p must be orthogonal to the vector q - p).
The search window is centered on the integer-coordinate corner and extends the specified window size, in pixels, from that center in each direction.
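That system of equations can be sketched in pure Python (a toy example with hand-constructed neighborhood points and gradients, not the cvFindCornerSubPix implementation): each point p with gradient g contributes the equation g . (q - p) = 0, and stacking them in least-squares form gives a 2x2 linear system for q:

```python
def refine_corner(points, grads):
    """Solve sum_p (g_p g_p^T)(q - p) = 0 for q via a 2x2 linear system."""
    gxx = gxy = gyy = bx = by = 0.0
    for (px, py), (gx, gy) in zip(points, grads):
        gxx += gx * gx
        gxy += gx * gy
        gyy += gy * gy
        bx += gx * gx * px + gx * gy * py
        by += gx * gy * px + gy * gy * py
    det = gxx * gyy - gxy * gxy
    # q = G^-1 b for the 2x2 system G q = b
    return ((gyy * bx - gxy * by) / det, (gxx * by - gxy * bx) / det)

# Synthetic neighborhood of a corner at (2.3, 1.7): points on a horizontal
# edge carry vertical gradients, points on a vertical edge carry horizontal
# gradients; the true corner is the only q orthogonal to all of them.
pts = [(2.3 + t, 1.7) for t in (-1.0, 0.5, 1.0)] + \
      [(2.3, 1.7 + t) for t in (-1.0, 0.5, 1.0)]
grds = [(0.0, 1.0)] * 3 + [(1.0, 0.0)] * 3

qx, qy = refine_corner(pts, grds)
```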
Invariant features
SIFT detects the dominant gradient direction at a point and records the local gradient histogram relative to that direction, which makes the descriptor rotation invariant.
Optical Flow
Every pixel in the image can be associated with a velocity or, equivalently, with the displacement of that pixel between two consecutive frames. When every pixel carries such a velocity, the result is a dense optical flow.
For sparse optical flow, the set of points to be tracked must be specified beforehand.
Lucas-Kanade Method
The LK algorithm relies only on local information from a small window around each point of interest, so it can be applied to a sparse set of points.
Drawback: a large motion can carry a point outside the small window.
Solution: pyramids. Tracking over an image pyramid lets a small window capture large motions.
Algorithm principle
1. Brightness constancy: the brightness of a tracked patch of the scene does not change between frames.
2. Temporal persistence, or "small motions": motion is slow relative to the frame rate.
3. Spatial coherence: neighboring points in the scene stay neighbors.
Compute the optical flow at the top level of the image pyramid, use the resulting motion estimate as the starting point for the level below, and repeat until the bottom of the pyramid is reached. This minimizes the chance of violating the small-motion assumption.
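The coarse-to-fine idea can be sketched in one dimension (a toy re-implementation, not cvCalcOpticalFlowPyrLK; the Gaussian signal, window, and iteration counts are invented for the example). The estimate from the half-resolution level is doubled and used to seed the full-resolution refinement:

```python
import math

def interp(sig, x):
    """Linear interpolation of a sampled signal at a real-valued position."""
    i = int(math.floor(x))
    i = max(0, min(len(sig) - 2, i))
    t = x - i
    return (1 - t) * sig[i] + t * sig[i + 1]

def lk_shift(f, g, d0=0.0, iters=25):
    """Iterative 1-D Lucas-Kanade: find d such that g(x + d) ~= f(x)."""
    d = d0
    for _ in range(iters):
        num = den = 0.0
        for x in range(1, len(f) - 1):
            ix = (f[x + 1] - f[x - 1]) / 2.0  # spatial gradient of f
            it = interp(g, x + d) - f[x]      # temporal difference after warping
            num += it * ix
            den += ix * ix
        d -= num / den
    return d

def downsample(sig):
    """Halve resolution by averaging sample pairs (one pyramid level)."""
    return [(sig[2 * i] + sig[2 * i + 1]) / 2.0 for i in range(len(sig) // 2)]

def pyramid_lk(f, g, levels=2):
    """Coarse-to-fine: estimate at the top level, double it, refine below."""
    if levels == 1:
        return lk_shift(f, g)
    d = 2.0 * pyramid_lk(downsample(f), downsample(g), levels - 1)
    return lk_shift(f, g, d0=d)

# A Gaussian bump translated by 5 samples between "frames".
N, shift = 64, 5.0
f = [math.exp(-((i - 28.0) / 6.0) ** 2) for i in range(N)]
g = [math.exp(-((i - 28.0 - shift) / 6.0) ** 2) for i in range(N)]

d = pyramid_lk(f, g, levels=2)
```

At half resolution the 5-sample motion becomes a 2.5-sample motion, small enough for the LK linearization to lock on; the doubled estimate then only needs a small correction at full resolution.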
cvCalcOpticalFlowLK -- non-pyramidal LK dense optical flow algorithm.
cvCalcOpticalFlowPyrLK -- pyramidal LK.
Computing an image pyramid is expensive, so reuse it: the second frame of one computed image pair becomes the first frame of the next pair.
#include <cv.h>
#include <cxcore.h>
#include
Dense Tracking Method
Horn-Schunck combines the brightness-constancy assumption with a smooth-velocity constraint, obtained by regularizing the derivatives of the flow's velocity components.
Like the LK algorithm, the Horn-Schunck method solves its differential equations iteratively.
cvCalcOpticalFlowHS
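A toy one-dimensional version of the Horn-Schunck iteration (an illustrative sketch, not cvCalcOpticalFlowHS; the ramp signal and the smoothness weight alpha are invented) shows how the smoothness term propagates motion into a textureless region where the data term alone says nothing:

```python
def horn_schunck_1d(f, g, alpha=1.0, iters=5000):
    """Jacobi iteration for a 1-D flow u minimizing
    (Ix*u + It)^2 + alpha^2 * |u'|^2."""
    n = len(f)
    ix = [0.0] * n
    it = [0.0] * n
    for x in range(n):
        l, r = max(x - 1, 0), min(x + 1, n - 1)
        ix[x] = (f[r] - f[l]) / (r - l)  # spatial gradient
        it[x] = g[x] - f[x]              # temporal difference
    u = [0.0] * n
    for _ in range(iters):
        nu = [0.0] * n
        for x in range(n):
            l, r = max(x - 1, 0), min(x + 1, n - 1)
            ubar = (u[l] + u[r]) / 2.0   # average of neighboring flow values
            nu[x] = ubar - ix[x] * (ix[x] * ubar + it[x]) / (alpha ** 2 + ix[x] ** 2)
        u = nu
    return u

# Ramp-then-flat signal shifted right by one sample: the flat half has no
# gradient, so its motion estimate must come from the smoothness term alone.
f = [float(min(i, 16)) for i in range(32)]
g = [float(min(i - 1, 16)) for i in range(32)]
u = horn_schunck_1d(f, g)
```

In the textured ramp the data term pins the flow near the true shift of 1; in the flat half the update reduces to pure neighbor averaging, which diffuses that motion across the gradient-free region.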
Mean-shift
A robust method for finding local extrema in the density distribution of a set of data.
Mean-shift is equivalent to convolving the underlying distribution with the mean-shift kernel and then running a hill-climbing algorithm on the result.
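A minimal one-dimensional sketch of this hill climb (illustrative only; the sample data and the flat-kernel window radius are invented). Each step moves the current position to the mean of the samples inside the window, which ascends the kernel-smoothed density until it reaches a mode:

```python
def mean_shift_1d(data, start, h=1.0, iters=50, tol=1e-6):
    """Flat-kernel mean-shift: repeatedly move x to the mean of the
    samples inside the window [x - h, x + h]."""
    x = start
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= h]
        if not window:
            break
        nx = sum(window) / len(window)
        if abs(nx - x) < tol:  # converged onto a mode
            break
        x = nx
    return x

# Two clusters of samples; different starts climb to different modes.
samples = [1.0, 1.2, 1.4, 5.0, 5.1, 5.2, 5.3]
left = mean_shift_1d(samples, start=0.5)   # climbs to the left mode
right = mean_shift_1d(samples, start=6.0)  # climbs to the right mode
```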
cvMeanShift
Back-projection image: a probability map in which each pixel of the input image is replaced by the value of the histogram bin that pixel falls into.
The CamShift search window resizes itself to follow the tracked distribution.
cvCamShift
Motion template
It can be applied to gesture recognition.
The motion template technique requires the silhouette (outline) of the object.
Motion History Image
cvUpdateMotionHistory is the OpenCV function for building a motion template.
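The update rule is simple enough to sketch in pure Python (a toy re-implementation of the documented rule, not the OpenCV function itself): silhouette pixels receive the current timestamp, pixels older than the history duration are cleared, and everything else is left alone:

```python
def update_motion_history(silhouette, mhi, timestamp, duration):
    """Motion-history update: pixels in the current silhouette get the
    current timestamp, pixels older than `duration` are cleared to zero,
    and the remaining pixels keep their recorded time."""
    for y, row in enumerate(silhouette):
        for x, s in enumerate(row):
            if s:
                mhi[y][x] = timestamp
            elif mhi[y][x] < timestamp - duration:
                mhi[y][x] = 0.0
    return mhi

# A 1x4 strip of pixels: an object moving right one pixel per frame.
mhi = [[0.0, 0.0, 0.0, 0.0]]
update_motion_history([[1, 0, 0, 0]], mhi, timestamp=1.0, duration=2.0)
update_motion_history([[0, 1, 0, 0]], mhi, timestamp=2.0, duration=2.0)
update_motion_history([[0, 0, 1, 0]], mhi, timestamp=3.0, duration=2.0)
update_motion_history([[0, 0, 0, 1]], mhi, timestamp=4.0, duration=2.0)
```

After the last frame the history holds a staircase of timestamps along the motion path; the gradient of that staircase is what the global-orientation step below reads off.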
Once the motion template has recorded the object's silhouettes over time, the gradient of the motion-history image can be computed to recover global motion information.
cvCalcMotionGradient computes that gradient (its arguments include the minimum and maximum allowed gradient magnitudes).
cvCalcGlobalOrientation sums the valid gradient direction vectors to obtain the global motion direction.
cvSegmentMotion segments the motion-history image into regions in order to compute local motions.
Estimator
Prediction phase -- use information gathered in the past to refine the model and predict where the object will appear next.
Correction phase -- obtain a new measurement and reconcile it with the prediction.
Kalman Filter
Given a set of strong but reasonable assumptions and the history of system measurements, one can build a model of the system state that maximizes the posterior probability of those measurements.
Assumptions: 1. the system being modeled is linear; 2. the noise affecting the measurements is white; 3. that noise is Gaussian.
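A scalar sketch of the predict/correct cycle under these assumptions (illustrative only; the noise variances and the measurement values are invented, and this is not OpenCV's cvKalman API). The prediction step grows the uncertainty, and the correction step blends prediction and measurement by the Kalman gain:

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    for z in measurements:
        # Prediction: the model is x_k = x_{k-1}, so only uncertainty grows.
        p = p + q
        # Correction: blend prediction and measurement by the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

# Noisy readings of a quantity whose true value is 1.0.
x, p = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

With each measurement the estimate moves toward the truth and the posterior variance p shrinks, which is exactly the maximum-a-posteriori behavior the assumptions above guarantee.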
That is a lot of material; it is worth going through carefully.