OpenCV Moving Target Detection: Frame Difference and Gaussian Mixture Model Methods

I. Simple Inter-Frame Difference Method

The frame difference method extracts motion regions from two or three adjacent frames of a continuous image sequence by taking the pixel-wise temporal difference and thresholding the result.
Code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char* argv[])
{
    VideoCapture capture("bike.avi");
    if (!capture.isOpened())
        return -1;
    double rate = capture.get(CV_CAP_PROP_FPS);
    int delay = 1000 / rate;
    Mat framePro, frame, dframe;
    bool flag = false;
    namedWindow("image", CV_WINDOW_AUTOSIZE);
    namedWindow("test", CV_WINDOW_AUTOSIZE);
    while (capture.read(frame))
    {
        if (!flag)
        {
            // First frame: nothing to difference against yet.
            framePro = frame.clone();
            flag = true;
        }
        else
        {
            // |current - previous|, then binarize to get the motion mask.
            absdiff(frame, framePro, dframe);
            framePro = frame.clone();
            threshold(dframe, dframe, 80, 255, CV_THRESH_BINARY);
            imshow("image", frame);
            imshow("test", dframe);
            waitKey(delay);
        }
    }
    return 0;
}
Effect:
Here we can see the disadvantage of the frame difference method: when the moving target is fast, the detected region widens. The speed in the figure is not very high, yet two overlapping silhouettes still appear.

II. Background Difference Method (Gaussian Background Modeling) Applied to Motion Detection

Principle: a Gaussian model quantifies a phenomenon precisely using Gaussian probability density functions (normal distribution curves), decomposing the phenomenon into a model built from several such Gaussian components.
The principle and process of building a Gaussian model of the image background: the gray-scale histogram reflects how often each gray value occurs in the image, and can be viewed as an estimate of the image's gray-level probability density. If the image contains a target region and a background region that differ in gray level, the histogram shows a double-peak (bimodal) shape: one peak corresponds to the target, and the other to the central gray level of the background. Complex images, especially medical images, are usually multi-peaked. Treating the multi-peak histogram as a superposition of several Gaussian distributions turns image segmentation into a mixture-decomposition problem.

In intelligent monitoring systems, the detection of moving objects is the central task. Within the detection and extraction of moving objects, background modeling is crucial for target recognition and tracking, and is an important part of background extraction.

First, the concepts of background and foreground. The foreground is any meaningful moving object in front of a static background. The basic idea of modeling is to separate the foreground from the current frame while keeping the background model close to the background of the current video frame; in practice, the current frame and the existing background are combined by a weighted average to update the background. However, because of sudden illumination changes and other external influences, the modeled background is generally not perfectly clean and clear, and the Gaussian mixture model is one of the most successful modeling methods for this problem.

The Gaussian mixture model uses K (typically three to five) Gaussian components to characterize each pixel in the image. When a new frame arrives, the model is updated and each pixel in the current image is matched against its mixture: if a component matches, the pixel is regarded as a background point; otherwise it is a foreground point. Each Gaussian component is determined by two parameters, its mean and its variance, and the learning mechanisms chosen for these parameters directly affect the stability, accuracy, and convergence of the model. Because we are modeling the background of a scene containing moving targets, the variance and mean of each component must be updated in real time. To improve the model's adaptability, an improved method uses different learning rates for the mean and variance updates. To improve the detection of large, slow-moving targets in busy scenes, the concept of a weighted mean is introduced: a background image is created and updated in real time, and the component weights and the background image are then combined to classify pixels as foreground or background.

At this point, the construction of the Gaussian mixture model is essentially complete; the process can be summarized as follows. First, initialize the predefined Gaussian components and their parameters, computing whatever quantities will be needed later. Second, for each pixel of each frame, test whether it matches one of the components: if it does, the pixel is assigned to that component and the component is updated with the new pixel value; if it matches none, a new Gaussian component is created from the pixel, replacing the least probable component of the existing model. Finally, the most probable components are chosen as the background model, paving the way for background extraction.

 

Methods: moving-object detection is currently divided into two settings, fixed cameras and moving cameras. The best-known approach for a moving camera is the optical flow method: by solving partial differential equations for the optical flow field of the image sequence, the motion in the scene can be estimated. The optical flow method can also be used when the camera is fixed, but because of its complexity it is often hard to compute in real time, so the Gaussian background model is used instead. When the camera is fixed, the background changes slowly, and most of the changes are caused by illumination and wind. By modeling the background, a given image can be separated into foreground and background; in general the foreground is the moving object, which achieves the goal of moving-object detection.
Single-Distribution Gaussian Background Model
The single-distribution Gaussian background model assumes that in a background image, the brightness of each pixel follows a Gaussian distribution; that is, the brightness of the background image B at point (x, y) satisfies:

I_B(x, y) ~ N(u, d)

In this way, each pixel of the background model carries two parameters: the mean u and the variance d.

For a given image G, if exp(-(I_G(x, y) - u(x, y))^2 / (2 * d^2)) > T, the point (x, y) is considered background; otherwise it is foreground.
At the same time, as time changes, the background image will also change slowly. At this time, we need to constantly update the parameters of each pixel point.
u(t + 1, x, y) = A * u(t, x, y) + (1 - A) * I(x, y)
Here A is called the update rate, and it controls how quickly the background adapts. Generally, d is not updated (in our experiments this changed the results very little).

Code (OpenCV 2):
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cam("bike.avi");   // pass 0 to open the default camera instead
    if (!cam.isOpened())
        return -1;
    namedWindow("mask", CV_WINDOW_AUTOSIZE);
    namedWindow("frame", CV_WINDOW_AUTOSIZE);
    Mat frame, mask;
    int delay = 1000 / cam.get(CV_CAP_PROP_FPS);
    // Mixture-of-Gaussians background subtractor. Parameters:
    // history = 10 frames, 10 Gaussian components, background ratio 0.5, noise sigma 0.
    BackgroundSubtractorMOG bgSubtractor(10, 10, 0.5, 0);
    while (cam.read(frame))
    {
        imshow("frame", frame);
        bgSubtractor(frame, mask, 0.001);   // learning rate 0.001
        imshow("mask", mask);
        waitKey(delay);
    }
    return 0;
}
Effect:
Compared with the inter-frame difference method, the detected moving target contains no extra regions and matches the shape of the target itself much more closely.
