ViBe algorithm for moving target detection

Source: Internet
Author: User

I. Introduction to Moving Target Detection

There are many existing methods for detecting moving objects in video. According to the relationship between the target and the camera, moving target detection algorithms can be divided into detection under a static background and detection under a dynamic background. Let us start with a brief look at the background types found in video.
Target detection under a static background distinguishes the actually changing regions from the background in a sequence of images. There are many methods for moving target detection under a static background, and they tend to focus on suppressing small background disturbances such as noise. Examples include:
1. Background Difference method
2. Inter-frame Difference method
3. Optical Flow method
4. Gaussian mixture model (GMM)
5. Codebook
There are also variants of these methods, such as three-frame or five-frame differencing, as well as combinations of the methods above.
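As a minimal illustration of the simplest of these, inter-frame differencing, here is a NumPy sketch; the function name, threshold value, and toy frames are my own, not from any particular implementation:

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Classify pixels as moving where the absolute inter-frame
    difference exceeds a threshold (simplest static-background method)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)  # 1 = moving, 0 = background

# toy example: a bright 2x2 "object" moves one pixel to the right
prev = np.zeros((5, 5), dtype=np.uint8)
curr = np.zeros((5, 5), dtype=np.uint8)
prev[1:3, 1:3] = 200
curr[1:3, 2:4] = 200
mask = frame_difference(prev, curr)
print(mask.sum())  # -> 4 pixels changed (trailing and leading edges)
```

Three-frame differencing is the same idea applied to two consecutive difference masks, intersected to thin out the trailing-edge response.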
Target detection under a moving background takes a different algorithmic approach from the static case: it generally focuses more on matching, and requires global motion estimation and compensation of the image. Because the target and the background move at the same time, the target cannot easily be identified from motion alone. There are many moving object detection algorithms for moving backgrounds, such as:
1. Block matching
2. Optical Flow Estimation
It has to be said that each of these methods has its own strengths in different circumstances; a simple algorithm is not necessarily weaker than a complex one. Some blogs have compared and evaluated these methods, and interested readers can refer to them.

(Excerpt from http://blog.csdn.net/zouxy09/article/details/9622401)
Recommended library: http://code.google.com/p/bgslibrary/ contains a variety of background subtraction methods and can definitely save you a lot of work.
There is also a blog post evaluating these methods: http://www.cnblogs.com/xrwang/archive/2010/02/21/ForegroundDetection.html

II. Introduction to the GMM Algorithm


One algorithm worth mentioning is GMM (see "Adaptive background mixture models for real-time tracking"). I usually use it directly when doing motion detection, for two reasons: first, it works really well and eliminates noise quickly; second, OpenCV includes a GMM implementation that can be called directly, which is very convenient.

In a nutshell, GMM compares each pixel of the input image against a background model: points with high similarity to the model are treated as background, points with low similarity as foreground, and the moving object is then extracted with morphological operations. The mixture model consists of K (usually 3 to 5) weighted Gaussian components. When a new frame arrives, if a pixel matches one of its K components well, it is considered background, the current pixel value is absorbed as a new sample, and the existing K components are updated; if it matches poorly, it is marked as a foreground point. The algorithm is mainly governed by two parameters, the variance and the mean, and the learning scheme chosen for these two parameters directly affects the algorithm's correctness, stability, and convergence. Code is everywhere on the Internet, e.g. http://blog.csdn.net/pi9nc/article/details/21717669; interested readers can take a look.
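To make the matching-and-updating idea concrete, here is a deliberately simplified sketch that keeps a single Gaussian per pixel instead of K of them; the class name, learning rate, and the 2.5-sigma match test are my own choices, and OpenCV's full mixture version is `cv2.createBackgroundSubtractorMOG2`:

```python
import numpy as np

class RunningGaussianBG:
    """Toy single-Gaussian-per-pixel background model, a reduction of
    the K-component mixture described above."""
    def __init__(self, first_frame, lr=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # initial variance
        self.lr, self.k = lr, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # low similarity -> foreground
        a = self.lr * (~fg)                       # update only matched pixels,
        self.mean += a * (frame - self.mean)      # as in the mixture update
        self.var += a * (d2 - self.var)
        return fg.astype(np.uint8)

bg = RunningGaussianBG(np.full((4, 4), 100, dtype=np.uint8))
frame = np.full((4, 4), 100, dtype=np.uint8)
frame[0, 0] = 255                                 # one moving pixel
mask = bg.apply(frame)
print(mask.sum())  # -> 1
```

The real GMM additionally carries a weight per component and sorts components by weight/sigma to decide which ones count as background.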

III. The ViBe Algorithm

However, the protagonist I am introducing today is not GMM but the ViBe algorithm. I saw online that ViBe is said to be a very good algorithm, reportedly even beating GMM, so I read the original paper: "ViBe: a powerful random technique to estimate the background in video sequences".
ViBe is a background modeling method proposed by Olivier Barnich and Marc Van Droogenbroeck in 2011. The algorithm builds a background model from neighborhood pixels and detects the foreground by comparing the background model with the current input pixel value. It can be divided into three steps:
The first step initializes a background model for every pixel from a single frame. It is assumed that each pixel and its neighboring pixels have similar value distributions in the spatial domain. Based on this assumption, each pixel's model can be filled with pixel values drawn from its neighborhood. To keep the background model statistically sound, the neighborhood must be large enough. When the first frame is input, i.e. at t = 0, the background model of a pixel x is

    M0(x) = { v0(y) | y ∈ NG(x) }

where v0(y) denotes the value of a pixel y adjacent to x in the spatial domain, and NG(x) is the neighborhood of x. Over the N initialization passes, the model samples mi, i = 1, 2, 3, ..., N are each drawn from a randomly chosen neighbor.
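The initialization step can be sketched in NumPy as follows. One simplification for vectorization: each of the N sample planes uses a single random neighbor offset shared by all pixels, whereas ViBe draws an independent neighbor per pixel; N = 20 is the paper's default:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20  # samples per pixel (the paper's default)

def vibe_init(first_frame):
    """Fill each pixel's N-sample model with values drawn at random
    from its 8-neighborhood in the first frame (t = 0)."""
    h, w = first_frame.shape
    # edge-replicate padding so border pixels also have 8 neighbors
    padded = np.pad(first_frame, 1, mode='edge')
    model = np.empty((N, h, w), dtype=first_frame.dtype)
    for i in range(N):
        dy, dx = rng.integers(-1, 2, size=2)  # random neighbor offset
        model[i] = padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return model

frame0 = rng.integers(0, 256, size=(6, 6), dtype=np.uint8)
model = vibe_init(frame0)
print(model.shape)  # -> (20, 6, 6)
```

Because the model is filled from one frame, ViBe needs no training sequence and responds from the second frame onward.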

The second step performs foreground segmentation on the subsequent image sequence. At t = k, the background model of pixel x is M(x) = { m1, m2, ..., mN } and the current pixel value is v(x). Whether the pixel is foreground is determined as follows:

    v(x) is background  ⇔  #{ r : | v(x) − mr | < T } ≥ #min

Here the index r is randomly selected, and T is a preset threshold. When the current value matches at least #min of the background samples, we consider the pixel background; otherwise it is foreground.
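A sketch of the segmentation test, using the paper's default values T = 20 for the matching radius and #min = 2 for the required match count (variable names are mine):

```python
import numpy as np

T = 20      # matching radius (the preset threshold)
N_MIN = 2   # minimum number of matching samples (#min)

def vibe_segment(model, frame):
    """A pixel is background when at least N_MIN of its N model samples
    lie within distance T of the current value; otherwise foreground."""
    dist = np.abs(model.astype(np.int16) - frame.astype(np.int16))
    matches = (dist < T).sum(axis=0)
    return (matches < N_MIN).astype(np.uint8)   # 1 = foreground

# toy model: all 20 samples equal 100 at every pixel
model = np.full((20, 4, 4), 100, dtype=np.uint8)
frame = np.full((4, 4), 100, dtype=np.uint8)
frame[2, 2] = 230                               # one intruding object pixel
mask = vibe_segment(model, frame)
print(mask.sum())  # -> 1
```

Note that the test can stop early at #min matches per pixel; this vectorized sketch counts all matches for simplicity.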

The third step is the background model update. The ViBe update is random in both time and space.
Randomness in time: when a new frame arrives and the pixel at position x is judged to be background, the model must be updated; one of the N background samples at x is picked at random and replaced with the new value (Figure 2-1 shows position x in the image and the pixels in its 8-neighborhood). This random choice of which sample to replace embodies the randomness in time.
Randomness in space: a pixel in the 8-neighborhood of x is also chosen at random, and one random sample of that neighbor's model is replaced with the new value. This reflects the randomness of the model update in space.

The above is the update process: a background pixel updates both its own model and, through the 8-neighborhood, its neighbors' models. With this 8-neighborhood update, ghosting and errors caused by slight jitter in the captured video (camera shake, small target motion) can be removed, making target detection more accurate.
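The random update can be sketched like this. The subsampling factor phi = 16 is the paper's default (a background pixel updates with probability 1/phi); the demo passes phi=1 so the toy run updates deterministically:

```python
import numpy as np

rng = np.random.default_rng(1)

def vibe_update(model, frame, bg_mask, phi=16):
    """Conservative random update: for each pixel judged background,
    with probability 1/phi overwrite one random sample of its own model,
    and with probability 1/phi one random sample of a random 8-neighbor
    (the spatial propagation that absorbs jitter and eats away ghosts)."""
    n, h, w = model.shape
    ys, xs = np.nonzero(bg_mask)
    for y, x in zip(ys, xs):
        if rng.integers(phi) == 0:                      # randomness in time
            model[rng.integers(n), y, x] = frame[y, x]
        if rng.integers(phi) == 0:                      # randomness in space
            ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
            nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
            model[rng.integers(n), ny, nx] = frame[y, x]

model = np.full((20, 4, 4), 100, dtype=np.uint8)
frame = np.full((4, 4), 120, dtype=np.uint8)
# every pixel is background here; phi=1 forces every update to fire
vibe_update(model, frame, bg_mask=np.ones((4, 4), dtype=np.uint8), phi=1)
print(int((model == 120).any()))  # -> 1, new values entered the model
```

Foreground pixels are never written into the model, which is what makes the update conservative; the spatial write is what eventually overwrites ghost samples from the outside in.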

In general, the background does not change dramatically, so the update count Updatenum of each background model should be similar. We therefore take the update count Initnum of the first-frame background as a reference value, and re-initialize the background model when Updatenum deviates too far from it, so as to avoid false positives caused by large-area illumination changes.

The first frames of a video are likely to contain targets, and conventional background modeling algorithms often cannot eliminate the resulting ghost regions quickly, which hurts foreground detection. ViBe updates the model via spatial propagation of pixel values, so the background model gradually diffuses outward, which helps identify and eliminate ghost regions faster. Below is an example of foreground detection on a traffic video with the ViBe algorithm.

The figure shows the foreground detection result under the ViBe algorithm; the red rectangles mark the more prominent ghost regions. Before frame 10, ghosting remains severe; as the model keeps updating, the ghost regions shrink, and after frame 40 they disappear completely. This illustrates ViBe's advantages in foreground detection and background model updating.

Code Address: http://download.csdn.net/detail/zhuangxiaobin/7360113
