OpenCV Learning Notes (9): Optical Flow Method


Original article; please cite the source when reposting: http://blog.csdn.net/crzy_sparrow/article/details/7407604


Directory:

I. General Method of Target Tracking Based on Feature Points

II. The Optical Flow Method

III. Optical Flow Functions in OpenCV

IV. Target Tracking Based on Optical Flow, with Class Encapsulation

V. Complete Code

VI. References

I. General Method of Target Tracking Based on Feature Points

Feature point-based tracking algorithms can be roughly divided into three steps:

1) detect the feature points of the current frame;

2) compare the gray levels of the current and next frames to estimate where each feature point of the current frame lies in the next frame;

3) filter out the feature points whose positions did not change; the remaining points lie on the moving targets.

Clearly, the quality of a feature-point-based tracking algorithm hinges on steps 1) and 2). The feature points can be Harris corners (see my other blog post) or edge points, and there are many ways to estimate the next-frame position: besides the optical flow method discussed here, the Kalman filter (which we ran into constantly in control-systems class, so it naturally comes to mind alongside optical flow) can also be used.

In this article, improved Harris corners are used as the feature points (see my other blog post: http://blog.csdn.net/crzy_sparrow/article/details/7391511), and the Lucas-Kanade optical flow method is used to do the tracking.


II. The Optical Flow Method

The optical flow chapter of Learning OpenCV covers this material in great detail, and I recommend reading the book; I also paste some of its content here.

In addition, I have attached some personal remarks to this part (corrections are welcome):

1. The basic assumptions are:

(1) Constant brightness: the brightness of a point does not change over time. This is the fundamental assumption of the optical flow method (every variant of it must satisfy this) and is used to derive the basic optical flow equation;

(2) Small motion: positions do not change drastically over time, so the gray-level change between consecutive frames caused by a unit change of position can be used to approximate the partial derivatives of the gray level with respect to position. This, too, is an indispensable assumption of the optical flow method;

(3) Spatial consistency: neighboring points in the scene project to neighboring points in the image, and those neighbors move with the same velocity. This assumption is particular to the Lucas-Kanade method. The basic optical flow equation gives only one constraint per point, while the velocities in the x and y directions are two unknowns; by assuming that all points in a feature point's neighborhood move alike, n equations can be set up and solved for the two velocity components (n being the total number of points in the neighborhood, the feature point included), as made explicit below.
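
To make this concrete (my own note, following the usual derivation rather than the book's exact wording): constant brightness says that a point keeps its gray level as it moves,

    I(x + \delta x,\, y + \delta y,\, t + \delta t) = I(x, y, t),

and a first-order Taylor expansion, valid under the small-motion assumption, yields the basic optical flow equation

    I_x u + I_y v + I_t = 0,

where I_x, I_y, I_t are the partial derivatives of the gray level and u = dx/dt, v = dy/dt are the two unknown velocity components.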

2. Solving the Equations

With two unknowns constrained by more than two linear equations, the natural tool is the least-squares method, and that is in fact what OpenCV uses: the velocity that minimizes the sum of squared errors is taken as the solution.
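
Written out (my own summary): stacking the basic equation over the n neighborhood points q_1, \dots, q_n gives the overdetermined system

    A \mathbf{v} = b, \qquad
    A = \begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}, \qquad
    b = -\begin{bmatrix} I_t(q_1) \\ \vdots \\ I_t(q_n) \end{bmatrix},

whose least-squares solution is

    \mathbf{v} = (A^{\top} A)^{-1} A^{\top} b.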

3. So far we have relied on the small-motion assumption, and the alert reader will object: if the target moves quickly, won't it be lost? Fortunately, the problem can be solved by working at multiple scales. First build a Gaussian pyramid for each frame, with the most reduced image at the top and the original image at the bottom. Estimate the flow at the top layer first, use that estimate as the initial position at the next layer down, and repeat the search layer by layer until the bottom of the pyramid is reached. As you may have noticed, this coarse-to-fine search not only handles tracking of fast-moving targets but also mitigates the aperture problem to some extent (a window of a given size covers more corner points on a reduced image than the same window does on the original image).
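
In Bouguet's pyramidal formulation of Lucas-Kanade (which, to my knowledge, is what OpenCV's implementation follows), the estimate is propagated from the top level L_m downward as

    g^{L-1} = 2\,(g^{L} + d^{L}), \qquad g^{L_m} = 0,

where d^L is the residual flow computed at level L and g^L is the initial guess carried into that level; the final flow at the bottom is d = g^0 + d^0. The factor 2 compensates for the doubling of resolution between adjacent pyramid levels.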




III. Optical Flow Functions in OpenCV

OpenCV 2.3.1 implements feature-point position estimation based on the optical flow method (given the feature positions in the current frame and the gray levels of both frames). The introduction below is from the OpenCV 2.3.1 reference manual:

calcOpticalFlowPyrLK

Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

void calcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg,
                          InputArray prevPts, InputOutputArray nextPts,
                          OutputArray status, OutputArray err,
                          Size winSize=Size(15,15), int maxLevel=3,
                          TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01),
                          double derivLambda=0.5, int flags=0)

Parameters:

prevImg – First 8-bit single-channel or 3-channel input image.

nextImg – Second input image of the same size and the same type as prevImg.

prevPts – Vector of 2D points for which the flow needs to be found. The point coordinates must be single-precision floating-point numbers.

nextPts – Output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of the input features in the second image. When the OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as the input.

status – Output status vector. Each element of the vector is set to 1 if the flow for the corresponding feature has been found; otherwise, it is set to 0.

err – Output vector that contains the difference between patches around the original and moved points.

winSize – Size of the search window at each pyramid level.

maxLevel – 0-based maximal pyramid level number. If set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on.

criteria – Parameter specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount, or when the search window moves by less than criteria.epsilon).

derivLambda – Not used.

flags – Operation flags:
OPTFLOW_USE_INITIAL_FLOW – Use initial estimations stored in nextPts. If the flag is not set, then prevPts is copied to nextPts and is considered the initial estimate.
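
As a minimal usage sketch of this function (my own example, not from the manual; the image file names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main() {
    // two consecutive frames, loaded as 8-bit single-channel images
    Mat prev = imread("frame1.png", 0);
    Mat next = imread("frame2.png", 0);
    if (prev.empty() || next.empty()) return -1;
    std::vector<Point2f> prevPts, nextPts;
    std::vector<uchar> status;
    std::vector<float> err;
    // detect up to 500 good corners in the first frame
    goodFeaturesToTrack(prev, prevPts, 500, 0.01, 10.);
    // estimate their positions in the second frame
    // (15x15 search window, 4 pyramid levels: maxLevel = 3)
    calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err,
                         Size(15, 15), 3);
    // draw the flow for every feature that was found
    for (size_t i = 0; i < nextPts.size(); i++)
        if (status[i])
            line(next, prevPts[i], nextPts[i], Scalar::all(255));
    imwrite("flow.png", next);
    return 0;
}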

IV. Target Tracking Based on Optical Flow, with Class Encapsulation

To put it plainly, here is the code; it covers feature extraction, feature-point tracking, and marking of the tracked points.

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;
using std::vector;

// Frame-processing base class
class FrameProcessor {
public:
    virtual void process(Mat &input, Mat &output) = 0;
};

// Feature tracker, derived from the frame-processing base class
class FeatureTracker : public FrameProcessor {
    Mat gray;                  // current gray-level image
    Mat gray_prev;             // previous gray-level image
    vector<Point2f> points[2]; // features in the previous [0] and current [1] frame
    vector<Point2f> initial;   // initial positions of the tracked features
    vector<Point2f> features;  // newly detected features
    int max_count;             // maximum number of features to detect
    double qlevel;             // quality level for feature detection
    double minDist;            // minimum tolerated distance between two features
    vector<uchar> status;      // set to 1 for each successfully tracked feature
    vector<float> err;         // tracking error (patch difference) per feature
public:
    FeatureTracker() : max_count(500), qlevel(0.01), minDist(10.) {}

    void process(Mat &frame, Mat &output) {
        // convert to a gray-level image
        cvtColor(frame, gray, CV_BGR2GRAY);
        frame.copyTo(output);
        // if too few points are being tracked, detect new ones
        if (addNewPoints()) {
            detectFeaturePoints();
            // append the detected features to the tracked set
            points[0].insert(points[0].end(), features.begin(), features.end());
            initial.insert(initial.end(), features.begin(), features.end());
        }
        // for the first frame, the previous image is the current one
        if (gray_prev.empty())
            gray.copyTo(gray_prev);
        // estimate the new feature positions from the two gray-level images
        // (the search window defaults to 15x15)
        calcOpticalFlowPyrLK(
            gray_prev, // previous gray-level image
            gray,      // current gray-level image
            points[0], // feature positions in the previous frame
            points[1], // estimated positions in the current frame
            status,    // 1 if the corresponding feature was found
            err);      // patch difference; points with abrupt motion
                       // could also be rejected based on this value
        // keep only the features that were found and actually moved
        int k = 0;
        for (int i = 0; i < (int)points[1].size(); i++) {
            if (acceptTrackedPoint(i)) {
                initial[k] = initial[i];
                points[1][k++] = points[1][i];
            }
        }
        points[1].resize(k);
        initial.resize(k);
        // mark the tracked features on the output image
        handleTrackedPoints(frame, output);
        // the current points and image become the previous ones
        std::swap(points[1], points[0]);
        cv::swap(gray_prev, gray);
    }

    void detectFeaturePoints() {
        goodFeaturesToTrack(gray,      // input image
                            features,  // output feature points
                            max_count, // maximum number of features
                            qlevel,    // quality level
                            minDist);  // minimum tolerated distance
    }

    // decide whether new feature points should be detected
    bool addNewPoints() {
        // add more when 10 or fewer points are being tracked
        return points[0].size() <= 10;
    }

    // a point is accepted as a target point if it was found
    // and moved between the two frames
    bool acceptTrackedPoint(int i) {
        return status[i] &&
               (std::abs(points[0][i].x - points[1][i].x) +
                std::abs(points[0][i].y - points[1][i].y) > 2);
    }

    // draw the tracked feature points
    void handleTrackedPoints(Mat &frame, Mat &output) {
        for (int i = 0; i < (int)points[1].size(); i++) {
            // a line from the initial position to the current one
            line(output, initial[i], points[1][i], Scalar::all(0));
            // a filled circle at the current position
            circle(output, points[1][i], 3, Scalar::all(0), -1);
        }
    }
};
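
The full program wires this class to a video stream. As a minimal driver sketch (my own, not the author's actual main function; the video file name is a placeholder):

int main() {
    // open the input video (placeholder file name)
    VideoCapture capture("tracking.avi");
    if (!capture.isOpened()) return -1;
    FeatureTracker tracker;
    Mat frame, output;
    while (capture.read(frame)) {
        tracker.process(frame, output); // track and mark the features
        imshow("Tracked features", output);
        if (waitKey(30) >= 0) break;    // ~30 ms per frame; any key quits
    }
    return 0;
}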


V. Complete Code

The complete runnable program is over 300 lines, too long to paste here; please download it from the resource I uploaded:

http://download.csdn.net/detail/crzy_sparrow/4183674

Running result: (screenshot of the tracking output omitted)

VI. References

[1] B. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Int. Joint Conference on Artificial Intelligence, pp. 674-679, 1981. The classic article that describes the original feature point tracking algorithm.
[2] J. Shi and C. Tomasi, "Good Features to Track," IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994. Describes an improved version of the original feature point tracking algorithm.

