Target detection and foreground-background separation: from background differencing to deep learning methods

Source: http://blog.csdn.net/u010402786/article/details/50596263

Prerequisites

Moving target detection is an important subject in the fields of computer image processing and image understanding, and it is widely used in robot navigation, intelligent surveillance, medical image analysis, and video image coding and transmission.
Classification of Target Detection Methods

First, when prior knowledge of the target is available. In this case there are two kinds of detection methods. The first kind uses the prior knowledge of the target to train a set of weak classifiers and then lets them vote together to detect the target; boosting and random forests follow this idea, and the familiar AdaBoost face detector is an example. The second kind uses the prior knowledge to find the best dividing boundary between target and non-target, as an SVM does. Each kind has its own strengths and can perform well.
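As a rough illustration of the two routes, here is a minimal scikit-learn sketch. It assumes feature extraction (e.g. Haar or HOG features) has already been done elsewhere; X_train, y_train, X_new, and the array shapes are placeholders, not something from the original post.

```python
# Minimal sketch: detection with prior knowledge, via a trained classifier.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

X_train = np.random.rand(200, 64)          # placeholder feature vectors for labelled windows
y_train = np.random.randint(0, 2, 200)     # 1 = target, 0 = non-target

# Route 1: many weak classifiers voting together (boosting / AdaBoost).
boost = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

# Route 2: one margin-based separating surface between target and non-target (SVM).
svm = SVC(kernel="rbf").fit(X_train, y_train)

X_new = np.random.rand(5, 64)              # features from new candidate windows
print(boost.predict(X_new), svm.predict(X_new))
```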

Second, when no prior knowledge of the target is available. In this case we do not know what the target is, so "target" has to be defined differently. One approach is to detect salient objects in the scene, for example by assigning each pixel a saliency probability computed from some features and then extracting the salient object. Another approach is to detect the moving objects in the scene.

Classical Target Detection Methods

1. Background Difference method
When detecting moving targets with a static background, the current image is differenced against a pre-stored background image, and a threshold on the difference extracts the moving regions; this is a classic dynamic target detection technique.
The background difference algorithm is suitable when the background is known; the difficulty is how to obtain a long-term static background model automatically.
In MATLAB, a simple background difference is just a call to the function imabsdiff(x, y).
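A minimal Python/OpenCV equivalent of the same idea, where cv2.absdiff plays the role of MATLAB's imabsdiff; the file names and the threshold value 30 are assumptions for illustration only.

```python
# Minimal background-difference sketch: |frame - background| followed by a threshold.
import cv2

# Assumed inputs: a static reference background and a current frame of the same size.
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(frame, background)                       # per-pixel absolute difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # pixels above the threshold = moving region
cv2.imwrite("motion_mask.png", mask)
```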
2. Frame Difference method
Target detection and extraction are carried out using the difference between two or more frames of a video sequence. The method exploits temporal information: it compares several successive frames and computes the gray-level difference of corresponding pixels; if the difference exceeds a threshold T2, the position is judged to contain a moving target.
It is better suited to dynamically changing scenes than the background difference method.
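A minimal frame-difference sketch in Python/OpenCV; the video file name and the threshold T2 = 25 are illustrative assumptions.

```python
# Minimal frame-difference sketch: difference consecutive frames and threshold the result.
import cv2

cap = cv2.VideoCapture("video.mp4")        # hypothetical input video
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                            # gray-level difference of consecutive frames
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # above T2 -> moving target pixel
    prev = gray

cap.release()
```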
3. Optical Flow Field method
The change between two adjacent frames is evaluated under the assumption that the gray level of corresponding pixels is preserved. The method can detect targets that move relative to the camera, and can even pick some moving objects out of the background while the camera itself is moving.
Its difficulties are the aperture problem and the non-uniqueness of solutions to the optical flow constraint equation, so the computed flow field may not properly represent the actual motion field.
Examples are as follows:
1. First, randomly and uniformly select K points within one frame, and filter out points whose neighborhood texture is too smooth, because such points are not well suited to optical flow computation.

2. Compute the optical flow vectors between these points and the previous frame; from them the approximate direction of the background motion can be seen.

3. The next step in this approach varies from person to person.
A CVPR 2007 paper, "Detection and Segmentation of Moving Objects in Highly Dynamic Scenes", takes these flow points, describes each with seven features (x, y, dx, dy, Y, U, V), and groups them by mean-shift clustering to form the contours of moving targets.
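A minimal sketch of steps 1 and 2 above in Python/OpenCV. It uses Shi-Tomasi corner selection in place of purely random sampling (which already filters out overly smooth neighborhoods) and pyramidal Lucas-Kanade flow; the frame file names are assumptions.

```python
# Minimal sparse optical flow sketch: pick well-textured points, track them to the next frame.
import cv2
import numpy as np

prev = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical consecutive frames
curr = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)

# Step 1: sample points whose neighborhoods have enough texture (Shi-Tomasi corners).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=8)

# Step 2: compute the displacement (dx, dy) of each point between the two frames.
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
flow = (nxt - pts)[status.flatten() == 1]               # keep only successfully tracked points
print("median background motion:", np.median(flow.reshape(-1, 2), axis=0))
```

What is done with the flow vectors afterwards (clustering, segmentation, etc.) is the part that varies from method to method, as noted above.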

New Target Detection Methods

Having written this far, the author wonders whether what follows can still be called target detection; in the author's view, foreground-background separation of an image is also a kind of target detection (the author's knowledge is limited, and corrections are welcome).

1. Pixel operation
Each pixel is processed individually to decide whether it belongs to the foreground or the background.
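One concrete and commonly used instance of per-pixel separation is a per-pixel Gaussian-mixture background model. The sketch below uses OpenCV's MOG2 subtractor as an illustration; this specific choice, the parameters, and the video file name are assumptions, not something stated in the original post.

```python
# Minimal per-pixel foreground/background separation sketch using a Gaussian-mixture model.
import cv2

cap = cv2.VideoCapture("video.mp4")                                   # hypothetical input
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # per-pixel decision: 255 = foreground, 0 = background

cap.release()
```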
  
2. Low Rank matrix application
Background modeling separates the background and the foreground in a captured video. The example below separates the background from the foreground using the RPCA (robust principal component analysis) method.
The project web site, with example results, is:
http://perception.csl.illinois.edu/matrix-rank/introduction.html
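A minimal RPCA sketch via principal component pursuit, solved with a basic inexact ALM / ADMM loop. This is a generic textbook formulation of M = L + S (L low-rank background, S sparse foreground), not the exact code behind the linked project, and the matrix shapes are illustrative.

```python
# Minimal RPCA (principal component pursuit) sketch: split M into low-rank L and sparse S.
import numpy as np

def shrink(X, tau):
    """Soft-threshold the entries of X by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Soft-threshold the singular values of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))            # standard choice of sparsity weight
    mu = m * n / (4.0 * np.abs(M).sum())      # standard choice of penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)   # low-rank (background) update
        S = shrink(M - L + Y / mu, lam / mu)          # sparse (foreground) update
        Y = Y + mu * (M - L - S)                      # dual update
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Hypothetical usage: M would hold, say, 100 vectorised 40x30 video frames, one per column.
M = np.random.rand(1200, 100)
background, foreground = rpca(M)
```

The columns of the low-rank component L then recover the (nearly) static background, while the sparse component S picks out the moving foreground in each frame.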

3. Deep Learning
FCN + DenseCRF gives precise segmentation together with semantic labels. Foreground target detection and segmentation in the image is handled well, and semantic labeling can then determine what each segmented region is. The demo is based on the ICCV paper "Conditional Random Fields as Recurrent Neural Networks".
The demo URL is as follows:
http://www.robots.ox.ac.uk/~szheng/crfasrnndemo
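For a rough sense of what an FCN-style segmenter produces, here is a minimal inference sketch using torchvision's generic FCN-ResNet50. Note that this is not the CRF-as-RNN model from the paper or its demo; the image file name is a placeholder, and the weights argument assumes a reasonably recent torchvision (>= 0.13).

```python
# Minimal semantic segmentation inference sketch with a pretrained FCN (torchvision).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.segmentation.fcn_resnet50(weights="DEFAULT")   # pretrained weights (VOC-style classes)
model.eval()

img = Image.open("scene.jpg").convert("RGB")    # hypothetical input image
batch = preprocess(img).unsqueeze(0)            # shape: (1, 3, H, W)

with torch.no_grad():
    out = model(batch)["out"]                   # (1, num_classes, H, W) per-pixel class scores
labels = out.argmax(dim=1).squeeze(0)           # per-pixel class index map
print(labels.unique())                          # class IDs present in the image
```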
  

A further article on neural network improvement methods is recommended:
http://blog.csdn.net/u010402786/article/details/49272757
Also attached is one future development trend of deep learning:
- The "attention model" is heating up
As deep learning develops, attention models are gaining momentum. Some systems, though not all, are starting to build in an "attention model", letting the neural network learn where to place its "attention" while completing its task. Attention is not yet part of the standard neural network pipeline, but it has been appearing in models from time to time.
