Object Tracking Based on Particle Filter


First, there is the particle filter implementation by Rob Hess (http://web.engr.oregonstate.edu/~hess/).

Starting from his code, we can understand the principles of the particle filter.

According to the introduction to particle filters on Wikipedia (http://en.wikipedia.org/wiki/Particle_filter), the particle filter actually has many variants. Rob Hess's implementation should be the most basic one: Sampling Importance Resampling (SIR), that is, resampling based on importance.

A rough understanding of the algorithm's principles:


1) Initialization stage: extract the tracking target's features


In this phase, you manually specify the tracking target, and the program computes that target's features. For example, you can use the target's color features. In Rob Hess's code, you drag out a tracking region at the start, and the program automatically computes the histogram of the Hue channel within that region; this histogram is the target feature. The histogram can be expressed as a vector, so the target feature is an N x 1 vector V.
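Below is a minimal sketch of this step in Python with OpenCV. It is not Rob Hess's original C code; the function name, the bin count, and the box format are illustrative assumptions.

```python
import cv2
import numpy as np

def target_histogram(frame_bgr, box, n_bins=16):
    """Hue histogram of the user-selected region: the target feature vector V.

    `box` is an (x, y, w, h) rectangle; `n_bins` is an illustrative choice,
    not necessarily the value used in the original implementation.
    """
    x, y, w, h = box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [n_bins], [0, 180])  # hue channel only
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)  # normalize so the bins sum to 1
```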

2) Search stage: like hounds looking for a target

We have the target's features; now we search for the target object. Here we release many particles, just as we would release many hounds to search for the target.
There are several ways to place the particles: a) uniform placement, i.e. the particles are spread evenly across the whole image plane (a uniform distribution); b) placement around the target's position in the previous frame according to a Gaussian distribution, i.e. more particles near the target and fewer farther away. Rob Hess's code uses the latter method. How does a dog search for the target after it is released? It uses the target feature obtained in the initialization stage (the color histogram, vector V). Each dog computes the color histogram Vi of the image at its own position, and then the similarity between that histogram and the target histogram. There are many similarity measures; the simplest is based on sum(abs(Vi - V)) (the smaller this distance, the higher the similarity). Each dog's similarity is then normalized so that the similarities of all the dogs sum to 1. A sketch of this step is given below.
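Here is a minimal Python/NumPy sketch of particle placement and weighting. The transition noise `sigma` and the exp(-distance) conversion from distance to similarity are illustrative choices, not the exact formulas from Rob Hess's code.

```python
import numpy as np

def propagate_particles(particles, sigma=(10.0, 10.0), img_size=(480, 640)):
    """Scatter particles around their previous positions (Gaussian transition).

    `particles` is an (N, 2) array of (x, y) positions; `sigma` and the
    clamping to the image bounds are illustrative assumptions.
    """
    h, w = img_size
    moved = particles + np.random.normal(scale=sigma, size=particles.shape)
    moved[:, 0] = np.clip(moved[:, 0], 0, w - 1)  # keep x inside the image
    moved[:, 1] = np.clip(moved[:, 1], 0, h - 1)  # keep y inside the image
    return moved

def particle_weights(hists, target_hist):
    """Similarity of each particle's histogram Vi to the target histogram V.

    Uses exp(-sum(abs(Vi - V))) to turn the L1 distance into a similarity,
    then normalizes so the weights of all particles sum to 1.
    """
    d = np.abs(hists - target_hist).sum(axis=1)  # sum(abs(Vi - V)) per particle
    w = np.exp(-d)
    return w / w.sum()
```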

3) Decision-making stage


The clever dogs we released report back to us: "the similarity between the image at dog No. 1 and the target is 0.3", "the similarity between the image at dog No. 2 and the target is 0.02", "the similarity between the image at dog No. 3 and the target is 0.0003", ..., "the similarity between the image at dog No. N and the target is 0.013". So where is the target most likely to be? We take a weighted average. Let the pixel coordinates of the image at dog No. n be (Xn, Yn) and its reported similarity be Wn; then the most likely pixel coordinates of the target are X = sum(Xn * Wn), Y = sum(Yn * Wn).
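In code, this weighted average is a one-liner; continuing the NumPy sketch above:

```python
import numpy as np

def estimate_position(particles, weights):
    """Weighted average of particle positions: X = sum(Xn * Wn), Y = sum(Yn * Wn)."""
    return (particles * weights[:, None]).sum(axis=0)
```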

4) Resampling

In general, the target moves around; it is neither static nor completely erratic. So where might the target be in the new frame? We need to send out dogs to search again, but how should we place them? Let's review the dogs' reports: "dog No. 1's similarity to the target is 0.3", "dog No. 2's similarity is 0.02", "dog No. 3's similarity is 0.0003", ..., "dog No. N's similarity is 0.013". According to these reports, dog No. 1 has the highest similarity and dog No. 3 the lowest. So we redeploy our forces: we place more dogs around the dog with the highest similarity, fewer around the dogs with low similarity, and we may even recall the lowest-scoring dogs entirely. This is Sampling Importance Resampling: resampling according to importance (the more important a dog is, the more copies of it we keep). A sketch of this step follows below.
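A minimal multinomial resampling sketch in NumPy (one of several standard SIR resampling schemes; Rob Hess's code may use a different variant):

```python
import numpy as np

def resample(particles, weights):
    """Draw N new particles with probability proportional to their weights,
    so high-weight particles are duplicated and low-weight ones tend to vanish."""
    n = len(particles)
    idx = np.random.choice(n, size=n, p=weights)  # weights must sum to 1
    return particles[idx].copy()
```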

Repeating the loop (2) -> (3) -> (4) -> (2) gives dynamic tracking of the target.
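Putting the pieces together, a minimal tracking loop might look like the following. `read_frame`, `particle_histograms`, `first_frame`, `initial_box`, and `initial_position` are hypothetical placeholders for your video I/O and per-particle feature extraction, and `N = 100` is an arbitrary particle count.

```python
import numpy as np

N = 100                                                     # number of particles (arbitrary)
V = target_histogram(first_frame, initial_box)              # (1) initialization: target feature
particles = np.tile(np.asarray(initial_position, float), (N, 1))  # all particles start on the target

while True:
    frame = read_frame()                                    # hypothetical video source
    if frame is None:
        break
    particles = propagate_particles(particles, img_size=frame.shape[:2])  # (2) search
    hists = particle_histograms(frame, particles)           # hypothetical: Vi for each particle
    weights = particle_weights(hists, V)                    # similarity -> normalized weights
    x, y = estimate_position(particles, weights)            # (3) decision: estimated position
    particles = resample(particles, weights)                # (4) resampling for the next frame
```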


The core idea of the particle filter is random sampling plus importance sampling. Since we do not know where the target is, we scatter particles at random; after scattering them, we compute each particle's importance from its feature similarity, and then scatter more particles in the important places and fewer in the unimportant ones. Compared with plain Monte Carlo filtering, the particle filter therefore needs less computation. This idea is similar in spirit to the RANSAC algorithm (used, for example, in the simplest case of straight-line fitting), which likewise samples candidates at random and keeps the ones best supported by the data.
