Implementation of a particle filter in MATLAB


The particle filter takes Bayesian inference and importance sampling as its basic framework, so a preliminary understanding of both is needed before tackling the filter itself. The Bayesian recursion is exact, but in practical problems the state dimension is often high and the integrals involved are intractable, which causes great trouble for a direct implementation. Importance sampling is introduced to overcome this: an importance density is designed, and each particle is assigned a weight according to the relationship between the actual distribution and the importance density. Applying the Bayesian recursion over time yields the weight-update formula and the recursive form of the importance density. Because sampling directly from the importance density is usually difficult in practice, a compromise is made: the state-transition distribution is chosen as the importance density, in which case the weight update is governed by the measurement equation.


The particle filter algorithm originates from Monte Carlo methods, which approximate the probability of an event by the frequency with which it occurs in a large number of samples. So wherever the filter needs a probability density such as p(x), it samples the variable x and approximates p(x) by a large set of samples and their associated weights. Because of this, the particle filter can handle probability distributions of any form, unlike the Kalman filter, which can only handle linear-Gaussian models. This is one of the great advantages of particle filtering.

Now consider an arbitrary state-space model:

x(t) = f(x(t-1), u(t), w(t))

y(t) = h(x(t), e(t))

where x(t) is the state at time t, u(t) is the control input, and w(t) and e(t) are the process noise and observation noise respectively. The first equation describes the state transition; the second is the observation equation. The question is: how does the particle filter recover the true state x(t) from the observation y(t), together with x(t-1) and u(t)?
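As a sketch, the generic model above can be written as a pair of functions. The concrete `f` and `h` below use the same nonlinear benchmark model as the MATLAB code at the end of this article; this is an illustrative choice, not the only possible one:

```python
import numpy as np

def f(x, k, w):
    # State transition at step k with process noise w
    # (the nonlinear benchmark model used in the MATLAB code below)
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * (k - 1)) + w

def h(x, e):
    # Observation equation: quadratic measurement with observation noise e
    return x**2 / 20 + e
```

With w = e = 0 these give the noise-free transition and measurement; in the filter, w and e are drawn from the noise distributions.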

Prediction phase: the particle filter first generates a large number of samples from the probability distribution of x(t-1); these samples are called particles. The distribution of these samples in the state space is an approximation of the probability distribution of x(t-1). Then, pushing each particle through the state-transition equation together with the control input yields one predicted particle per particle.

Calibration phase: once the observation y arrives, all the particles are evaluated with the observation equation, that is, with the conditional probability p(y | xi). Plainly speaking, this conditional probability is the probability of obtaining the observation y when the true state x(t) equals the i-th particle xi. This conditional probability becomes the weight of the i-th particle. After all particles are evaluated in this way, the particles more likely to produce the observation y receive higher weights.
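The calibration step can be sketched as follows, assuming scalar Gaussian observation noise with variance R and the quadratic observation model used in the MATLAB code below; each predicted particle gets a weight proportional to the likelihood of the actual observation:

```python
import numpy as np

def weight_particles(y, x_pred, R=1.0):
    # Predicted observation of each particle under the model h(x) = x^2 / 20
    y_pred = x_pred**2 / 20
    # Gaussian likelihood p(y | xi) of the actual observation y
    w = np.exp(-(y - y_pred)**2 / (2 * R)) / np.sqrt(2 * np.pi * R)
    return w / w.sum()  # normalized weights sum to 1

x_pred = np.array([0.0, 1.0, 5.0])
w = weight_particles(1.25, x_pred)
# the particle whose predicted observation matches y best gets the largest weight
```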

Resampling phase: discard the low-weight particles and duplicate the high-weight ones. What we obtain is, of course, the particle set we need, the one representing the probability distribution of the true state x(t). In the next round of filtering, the resampled particle set is fed into the state-transition equation to produce the predicted particles directly.

Initial-state problem: since we know nothing about x(0), all we can assume is that x(0) is uniformly distributed over the state space, so the initial samples are spread uniformly across it. All of these samples are then fed into the state-transition equation to obtain the predicted particles, and the weight of every predicted particle is evaluated; of course, only a small fraction of the particles in the whole state space obtain high weights. Finally, resampling removes the low-weight particles, so the next round of filtering concentrates on the high-weight ones.
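The resampling step described above can be sketched in Python. This is multinomial resampling, the same scheme as the inner loop of the MATLAB code below: N new particles are drawn with replacement, each old particle being picked with probability equal to its weight:

```python
import numpy as np

def resample(particles, weights, rng):
    # Draw indices with probability proportional to the weights;
    # high-weight particles are duplicated, low-weight ones die out
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(0)
particles = np.array([-2.0, 0.0, 3.0])
weights = np.array([0.05, 0.05, 0.9])
new = resample(particles, weights, rng)
# most of the resampled particles will be copies of 3.0
```

After resampling, every surviving particle carries the same weight 1/N, which is why the MATLAB code below can take a plain mean as the state estimate.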

Specific process:

1) Initialization phase: extract the tracking-target features

In this stage the tracking target is specified manually, and the program computes its features, such as its color characteristics. In Rob Hess's code specifically, you start by dragging out a tracking region with the mouse; the program then automatically computes the hue-space histogram of that region, which is the target feature. A histogram can be represented by a vector, so the target feature is an n*1 vector v.

2) Search stage: release the dogs

Now that we have the target's features, we release a lot of dogs to search for the target; the dogs here are the particles. There are several ways to release them, for example: a) uniformly, i.e. scatter the particles evenly over the whole image plane (uniform distribution); or b) as a Gaussian distribution around the target's position in the previous frame, which can be understood as placing more particles near the target and fewer far away. Rob Hess's code uses the latter method. Once released, how does each dog search for the target? Using the target feature obtained in the initialization phase (the hue histogram, vector v). Each dog computes the color feature of the image at its own location, obtaining a hue histogram, a vector vi, and then computes the similarity between that histogram and the target histogram. There are many similarity measures; the simplest is to compute sum(abs(vi - v)). After every dog has computed its similarity, the values are normalized so that all the dogs' similarities add up to 1.
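The histogram comparison above can be sketched as follows. The conversion from L1 distance to similarity via 1/(1+d) is one simple illustrative choice, not necessarily the measure Rob Hess's code uses:

```python
import numpy as np

def particle_weights(target_hist, particle_hists):
    # L1 distance between each particle's hue histogram and the target's:
    # sum(abs(vi - v)) for each row vi
    d = np.abs(particle_hists - target_hist).sum(axis=1)
    sim = 1.0 / (1.0 + d)    # smaller distance -> larger similarity
    return sim / sim.sum()   # normalize so all weights add up to 1

target = np.array([0.5, 0.3, 0.2])
hists = np.array([[0.5, 0.3, 0.2],    # identical to the target
                  [0.1, 0.1, 0.8]])   # very different
w = particle_weights(target, hists)
# the identical histogram receives the larger weight
```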

3) Decision stage

The smart dogs we released send back reports: "the similarity between the image at dog 1's location and the target is 0.3", "the similarity at dog 2's location is 0.02", "the similarity at dog 3's location is 0.0003", ..., "the similarity at dog N's location is 0.013". So where is the target most likely to be? We take a weighted average: if dog n's pixel coordinates are (xn, yn) and its reported similarity is wn, then the most likely pixel coordinates of the target are x = sum(xn*wn), y = sum(yn*wn).
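With normalized similarities as weights, the decision step is just a weighted average of the particle positions; the coordinates and weights below are made-up illustrative values:

```python
import numpy as np

# Pixel coordinates (xn, yn) of each particle and its normalized weight wn
xs = np.array([10.0, 20.0, 30.0])
ys = np.array([5.0, 5.0, 20.0])
ws = np.array([0.2, 0.5, 0.3])   # already normalized: sums to 1

x_est = np.sum(xs * ws)          # x = sum(xn * wn)
y_est = np.sum(ys * ws)          # y = sum(yn * wn)
print(x_est, y_est)              # -> 21.0 9.5
```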

4) Resampling stage

Since we are doing target tracking, the target generally keeps running around and moving. Where might it be in a new frame? We release the dogs to search again, but how should they be placed this time? Look back at the dogs' reports: "the similarity at dog 1's location is 0.3", "the similarity at dog 2's location is 0.02", "the similarity at dog 3's location is 0.0003", ..., "the similarity at dog N's location is 0.013". Combining all the reports, dog 1 has the highest similarity and dog 3 the lowest, so we redeploy the force. As the saying goes, put the good steel on the blade's edge: we release more dogs around the dog with the highest similarity, fewer around the one with the lowest, possibly even withdrawing the original dog there entirely. This is sampling importance resampling (SIR): resampling according to importance (more dogs for the more important dogs).

Repeating the cycle (2) -> (3) -> (4) -> (2) completes dynamic tracking of the target.


A simple implementation of the particle filter

clc;
clear all;
close all;

x = 0;       % initial state
R = 1;       % measurement noise variance
Q = 1;       % process noise variance
tf = 100;    % number of time steps
N = 100;     % number of particles
P = 2;       % initial state variance
xhatPart = x;

% Draw the initial particles from a zero-mean Gaussian with variance P
for i = 1 : N
    xpart(i) = x + sqrt(P) * randn;
end

xarr = [x];
yarr = [x^2/20 + sqrt(R)*randn];
xhatPartArr = [xhatPart];

for k = 1 : tf
    % True state at time k
    x = 0.5*x + 25*x/(1 + x^2) + 8*cos(1.2*(k-1)) + sqrt(Q)*randn;
    % Observation at time k
    y = x^2/20 + sqrt(R)*randn;

    for i = 1 : N
        % Propagate each particle through the state equation (prediction)
        xpartminus(i) = 0.5*xpart(i) + 25*xpart(i)/(1 + xpart(i)^2) ...
            + 8*cos(1.2*(k-1)) + sqrt(Q)*randn;
        ypart = xpartminus(i)^2/20;   % predicted observation of particle i
        vhat = y - ypart;             % innovation w.r.t. the real observation
        % Likelihood weight of each particle
        q(i) = (1/sqrt(2*pi*R)) * exp(-vhat^2/(2*R));
    end

    % Normalize the weights
    qsum = sum(q);
    for i = 1 : N
        q(i) = q(i) / qsum;
    end

    % Resample according to the weights (multinomial resampling)
    for i = 1 : N
        u = rand;
        qtempsum = 0;
        for j = 1 : N
            qtempsum = qtempsum + q(j);
            if qtempsum >= u
                xpart(i) = xpartminus(j);
                break;
            end
        end
    end

    % After resampling all particles carry equal weight,
    % so the state estimate is the mean of the N particles
    xhatPart = mean(xpart);

    xarr = [xarr x];
    yarr = [yarr y];
    xhatPartArr = [xhatPartArr xhatPart];
end

t = 0 : tf;
figure;
plot(t, xarr, 'b.', t, xhatPartArr, 'k');
legend('True value', 'Estimated value');
set(gca, 'FontSize', 10);
xlabel('Time step');
ylabel('State');
title('Particle filter');

xhatPartRMS = sqrt(norm(xarr - xhatPartArr)^2 / tf);
figure;
plot(t, abs(xarr - xhatPartArr), 'b');
title('The error of PF');




