Optical Flow and Its OpenCV Implementation


Reprinted; please credit the source: http://blog.csdn.net/zhonghuan1992


Concept of Optical Flow:

The concept was first proposed by Gibson in 1950. Optical flow is the instantaneous velocity of the pixel motion of a moving object on the observation (imaging) plane. Using the change of pixels in the time domain and the correlation between adjacent frames of an image sequence, it establishes the correspondence between the previous frame and the current frame, and thereby computes the motion of objects between adjacent frames. In general, optical flow is produced by the movement of the foreground target, the motion of the camera, or the joint motion of the two.

When a person's eyes observe a moving object, the scene forms a series of continuously changing images on the retina. This continuously changing information keeps "flowing" across the retina (that is, the image plane), like a flow of light, hence the name "optical flow". Optical flow expresses change in the image; because it carries information about object motion, an observer can use it to determine how objects are moving.

The following figure shows the movement of a ball over five consecutive frames. The numbers on the arrow indicate the frame indices. The movement of the red ball forms the optical flow.

 

 

Operation:

Given a set of points in one image, the task is to find the corresponding points in another image: for each point, determine where it has moved.

Equivalently: given a point [ux, uy]^T in image I1, find the displacement [δx, δy]^T such that the point [ux + δx, uy + δy]^T in I2 matches it, i.e. minimize the error

ε(δx, δy) = Σ over (x, y) in W of ( I1(x, y) − I2(x + δx, y + δy) )²

The window W is added to indicate a region: in general a whole neighborhood of the point is tracked rather than a single pixel, because one pixel's intensity alone is too ambiguous to match.
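As a minimal sketch of this window-based matching error (plain Python with a hypothetical 5x5 toy image, not OpenCV), ε can be evaluated for a candidate displacement like this:

```python
# Sum-of-squared-differences error over a (2*half+1)^2 window around (ux, uy).
# I1, I2 and the displacement are illustrative toy values, not a real pipeline.

def window_error(I1, I2, ux, uy, dx, dy, half=1):
    """Return epsilon for displacement (dx, dy) over a window W around (ux, uy)."""
    eps = 0.0
    for oy in range(-half, half + 1):
        for ox in range(-half, half + 1):
            d = I1[uy + oy][ux + ox] - I2[uy + dy + oy][ux + dx + ox]
            eps += d * d
    return eps

# Toy example: I2 is I1 shifted one pixel to the right.
I1 = [[0, 0, 0, 0, 0],
      [0, 9, 5, 0, 0],
      [0, 5, 9, 0, 0],
      [0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0]]
I2 = [[0, 0, 0, 0, 0],
      [0, 0, 9, 5, 0],
      [0, 0, 5, 9, 0],
      [0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0]]

print(window_error(I1, I2, 1, 1, 1, 0))  # correct displacement -> error 0.0
print(window_error(I1, I2, 1, 1, 0, 0))  # wrong displacement -> large error
```

The displacement that minimizes this error is the estimated motion of the tracked region.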

In vision and graphics applications, tracking points (features) across multiple images is a basic operation: find an object in one image and observe how it moves through the following ones.

Feature-point-based tracking algorithms can be roughly divided into three steps:

1) detect the feature points of the current frame;

2) compare the gray levels of the current frame and the next frame to estimate the position of each feature point of the current frame in the next frame;

3) filter out feature points whose positions did not change; the remaining (moving) points are the targets.

Feature points can be Harris corners or edge points.

Consider the intensity I(x, y, t) of a pixel in the first frame (a time dimension t is added here; before, we only processed single images, so time was not needed, but now it is). Suppose the pixel moves by a distance (dx, dy) to the next frame, taken after time dt. Since it is the same pixel, its intensity does not change (this unchanged intensity is in fact the basic assumption, brightness constancy, behind many optical flow algorithms). So we can write:

I(x, y, t) = I(x + dx, y + dy, t + dt)

Expanding the right-hand side with a first-order Taylor series approximation gives:

I(x + dx, y + dy, t + dt) ≈ I(x, y, t) + (∂I/∂x) dx + (∂I/∂y) dy + (∂I/∂t) dt

Therefore, dividing by dt and writing u = dx/dt, v = dy/dt:

I_x u + I_y v + I_t = 0

The above equation is called the optical flow equation. The partial derivatives I_x, I_y, I_t can be computed from the images, but u and v are unknown: one equation with two unknowns cannot be solved on its own. There are many ways around this problem; one of them is the Lucas-Kanade method.
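A small numeric check of the optical flow equation (plain Python; the linear intensity ramp and the sampling point are made-up illustrative values): for a pattern translating with velocity (u, v), the residual I_x·u + I_y·v + I_t comes out as (approximately) zero.

```python
# Synthetic intensity field I(x, y, t) = 2*x + 3*y - (2*u + 3*v)*t,
# i.e. a linear ramp translating with velocity (u, v). Illustrative only.

u, v = 1.5, -0.5          # true motion, pixels per frame

def I(x, y, t):
    # ramp moving with velocity (u, v)
    return 2.0 * (x - u * t) + 3.0 * (y - v * t)

# central finite differences for the partial derivatives
h = 1e-5
x0, y0, t0 = 4.0, 7.0, 2.0
Ix = (I(x0 + h, y0, t0) - I(x0 - h, y0, t0)) / (2 * h)   # dI/dx
Iy = (I(x0, y0 + h, t0) - I(x0, y0 - h, t0)) / (2 * h)   # dI/dy
It = (I(x0, y0, t0 + h) - I(x0, y0, t0 - h)) / (2 * h)   # dI/dt

residual = Ix * u + Iy * v + It
print(residual)  # approximately 0: the constraint holds for this translating pattern
```

Any (u, v) pair lying on the line I_x u + I_y v + I_t = 0 satisfies the constraint, which is exactly why one equation is not enough to pin down the motion.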

Lucas-Kanade:

The Lucas-Kanade method makes the assumption that all neighboring pixels have similar motion. It uses a 3x3 window and assumes that all 9 points in it share the same motion (u, v), so the problem becomes nine equations in two unknowns. This overdetermined system can certainly be solved; a good way to solve it is the least-squares method.

With n = 9 window pixels, there are nine equations:

I_x(q1) u + I_y(q1) v = −I_t(q1)
I_x(q2) u + I_y(q2) v = −I_t(q2)
...
I_x(q9) u + I_y(q9) v = −I_t(q9)

Here q1, q2, ..., q9 are the pixels inside the window, and I_x, I_y, I_t are the partial derivatives evaluated at each of them. The system can be written in matrix form A v = b, where:

A = [ I_x(q1)  I_y(q1) ]      v = [ u ]      b = [ −I_t(q1) ]
    [ I_x(q2)  I_y(q2) ]          [ v ]          [ −I_t(q2) ]
    [   ...      ...   ]                         [   ...    ]
    [ I_x(q9)  I_y(q9) ]                         [ −I_t(q9) ]

Multiplying both sides by A^T gives the normal equations, so the least-squares solution is:

v = (A^T A)^(−1) A^T b

Written out, the final solutions for the two unknowns are:

[ u ]   [ Σ I_x(qi)²          Σ I_x(qi) I_y(qi) ]^(−1)  [ −Σ I_x(qi) I_t(qi) ]
[ v ] = [ Σ I_x(qi) I_y(qi)   Σ I_y(qi)²        ]       [ −Σ I_y(qi) I_t(qi) ]
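The least-squares solution above can be sketched in a few lines of plain Python (the nine gradient samples below are made-up toy values, constructed so the true motion is (2, 1); this illustrates the 2x2 normal equations, it is not OpenCV code):

```python
# Stack the 9 constraints Ix(qi)*u + Iy(qi)*v = -It(qi) and solve the
# 2x2 normal equations (A^T A) v = A^T b in closed form.

true_u, true_v = 2.0, 1.0   # ground-truth motion used to fabricate It
grads = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0),
         (0.5, 0.5), (2.0, 0.0), (0.0, 2.0), (1.5, 1.0)]
# brightness constancy: It = -(Ix*u + Iy*v) at every window pixel
samples = [(ix, iy, -(ix * true_u + iy * true_v)) for ix, iy in grads]

# entries of A^T A and A^T b: sums over the window
Sxx = sum(ix * ix for ix, iy, it in samples)
Sxy = sum(ix * iy for ix, iy, it in samples)
Syy = sum(iy * iy for ix, iy, it in samples)
Sxt = sum(ix * it for ix, iy, it in samples)
Syt = sum(iy * it for ix, iy, it in samples)

det = Sxx * Syy - Sxy * Sxy          # A^T A must be invertible (det != 0)
u = (-Syy * Sxt + Sxy * Syt) / det   # u = (Sxy*Syt - Syy*Sxt) / det
v = (Sxy * Sxt - Sxx * Syt) / det    # v = (Sxy*Sxt - Sxx*Syt) / det

print(u, v)  # recovers the true motion (2.0, 1.0)
```

Note that det is essentially the Harris corner response: the system is well conditioned exactly at corner-like points, which is why corners make good features to track.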

The above solves for small, coherent motion: we assumed that all nine pixels move with the same velocity. In reality, large coherent motions are widespread, and we need large windows to capture them; but a large window violates the motion-coherence assumption. The image pyramid solves this problem: at coarser pyramid levels a large motion becomes a small one, so the small-motion assumption holds again. (I don't know image pyramids well enough yet, so I won't risk posting a muddled explanation here.)
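A minimal sketch of the pyramid idea (plain Python with a hypothetical 2x2 average-pooling downsampler; real implementations, OpenCV's included, smooth with a Gaussian before subsampling):

```python
# Each pyramid level halves the resolution, so a displacement of d pixels
# at the base appears as d / 2**L at level L, where the small-motion
# assumption of Lucas-Kanade holds again. Toy 8x8 image for illustration.

def downsample(img):
    """Halve width and height by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

base = [[float(x) for x in range(8)] for _ in range(8)]   # 8x8 horizontal ramp
pyramid = [base]
while len(pyramid[-1]) >= 2:
    pyramid.append(downsample(pyramid[-1]))

print([len(level) for level in pyramid])  # [8, 4, 2, 1]
```

The pyramidal LK algorithm estimates the flow at the coarsest level first, then propagates and refines the estimate level by level down to the original resolution.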

Implementation in OpenCV:

OpenCV provides support for the method described above; the function is calcOpticalFlowPyrLK(). Let's now track some points in a video, using goodFeaturesToTrack() to pick the points.

We take the first frame, detect Shi-Tomasi corner points in it, and then track those points with the Lucas-Kanade optical flow method.

#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <ctype.h>

using namespace cv;
using namespace std;

static void help()
{
    // print a welcome message, and the OpenCV version
    cout << "\nThis is a demo of Lukas-Kanade optical flow lkdemo(),\n"
            "Using OpenCV version " << CV_VERSION << endl;
    cout << "\nIt uses camera by default, but you can provide a path to video as an argument.\n";
    cout << "\nHot keys: \n"
            "\tESC - quit the program\n"
            "\tr - auto-initialize tracking\n"
            "\tc - delete all the points\n"
            "\tn - switch \"night\" mode on/off\n"
            "To add/remove a feature point click it\n" << endl;
}

Point2f point;
bool addRemovePt = false;

static void onMouse(int event, int x, int y, int /*flags*/, void* /*param*/)
{
    if (event == CV_EVENT_LBUTTONDOWN)
    {
        point = Point2f((float)x, (float)y);
        addRemovePt = true;
    }
}

int main(int argc, char** argv)
{
    help();

    VideoCapture cap;
    TermCriteria termcrit(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03);
    Size subPixWinSize(10, 10), winSize(31, 31);

    const int MAX_COUNT = 500;
    bool needToInit = false;
    bool nightMode = false;

    /*
    if (argc == 1 || (argc == 2 && strlen(argv[1]) == 1 && isdigit(argv[1][0])))
        cap.open(argc == 2 ? argv[1][0] - '0' : 0);
    else if (argc == 2)
        cap.open(argv[1]);
    */
    cap.open("G:\\video analysis getting started exercise\\video analysis getting started exercise-appendix\\sample.avi");

    if (!cap.isOpened())
    {
        cout << "Could not initialize capturing...\n";
        return 0;
    }

    namedWindow("LK", 1);
    setMouseCallback("LK", onMouse, 0);

    Mat gray, prevGray, image;
    vector<Point2f> points[2];

    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty())
            break;

        frame.copyTo(image);
        cvtColor(image, gray, COLOR_BGR2GRAY);

        if (nightMode)
            image = Scalar::all(0);

        if (needToInit)
        {
            // automatic initialization
            goodFeaturesToTrack(gray, points[1], 100, 0.01, 10, Mat(), 3, 0, 0.04);
            cornerSubPix(gray, points[1], subPixWinSize, Size(-1, -1), termcrit);
            addRemovePt = false;
        }
        else if (!points[0].empty())
        {
            vector<uchar> status;
            vector<float> err;
            if (prevGray.empty())
                gray.copyTo(prevGray);
            calcOpticalFlowPyrLK(prevGray, gray, points[0], points[1], status, err,
                                 winSize, 3, termcrit, 0, 0.001);
            size_t i, k;
            for (i = k = 0; i < points[1].size(); i++)
            {
                if (addRemovePt)
                {
                    if (norm(point - points[1][i]) <= 5)
                    {
                        addRemovePt = false;
                        continue;
                    }
                }
                if (!status[i])
                    continue;
                points[1][k++] = points[1][i];
                circle(image, points[1][i], 3, Scalar(0, 255, 0), -1, 8);
            }
            points[1].resize(k);
        }

        if (addRemovePt && points[1].size() < (size_t)MAX_COUNT)
        {
            vector<Point2f> tmp;
            tmp.push_back(point);
            cornerSubPix(gray, tmp, winSize, Size(-1, -1), termcrit);
            points[1].push_back(tmp[0]);
            addRemovePt = false;
        }

        needToInit = false;
        imshow("LK", image);

        char c = (char)waitKey(100);
        if (c == 27)
            break;
        switch (c)
        {
        case 'r':
            needToInit = true;
            break;
        case 'c':
            points[0].clear();
            points[1].clear();
            break;
        case 'n':
            nightMode = !nightMode;
            break;
        }

        std::swap(points[1], points[0]);
        cv::swap(prevGray, gray);
    }

    return 0;
}


 

Result: feature points are picked up, and they move along with the vehicles in the video.

 


In OpenCV, the calcOpticalFlowPyrLK function computes the optical flow. Strictly speaking, it does not return the flow vectors themselves: it predicts where each input point appears in the next frame and returns that new set of point positions (the current features). Take a good look at the routine above.

What is optical flow?

The optical flow method is an important technique for motion-image analysis. The concept was first proposed by Gibson in 1950 and is defined as the speed of pattern motion in a time-varying image. When an object moves, the brightness pattern of the corresponding points on the image moves as well; the apparent motion of this image brightness pattern is the optical flow. Optical flow expresses change in the image and, because it contains the object's motion information, allows an observer to determine how objects move. The definition extends to the optical flow field: a two-dimensional (2D) instantaneous velocity field composed of all pixels in the image, where each 2D velocity vector is the projection of the 3D velocity vector of a visible scene point onto the imaging plane.

Optical flow therefore contains not only the motion information of the observed object, but also rich information about its 3D structure. Its study has become an important part of computer vision and related fields: optical flow plays an important role in object segmentation, recognition, tracking, robot navigation, and shape recovery. Recovering the 3D structure and motion of objects from optical flow is one of the most significant and challenging tasks facing computer vision researchers. Precisely because of this importance, many researchers in psychology, physiology, and engineering have joined its study; over the past decades they have proposed many ways to compute optical flow, and new methods keep emerging.
