From Bayesian to particle filter--round 2

Source: Internet
Author: User

The particle filter is genuinely a complex thing. In the half month since I first ran into it, this blogger has read who knows how many articles and looked up a good deal of material, going over much of it again and again, carefully. Today I'm here to show the results of that study; advice from the experts is very welcome.

Before we talk about particle filtering, we have to start with something called "Bayesian filtering", because particle filtering is built on top of it. Talking in the abstract is hard to follow, so let's take target tracking as an example and see directly how this thing works:

1. First, we set up a dynamic system to describe how the tracked target evolves over time. Simply put, we use the target's position (x, y) as the state of the dynamic system.

How do you describe it?

We use x_t to denote the state of the system at time t; in this example x_t = (x, y), the target's position. We use y_t to denote the observation of the target at time t. Note that x_t is the target's position according to the model we built, and the target's actual position is not necessarily equal to it. A simple example: a car moves in a straight line with uniform acceleration. x_t is the position we compute from the kinematics formula, and y_t is the position we get from GPS.

Because of error, the result obtained from the theoretical formula alone is bound to be biased. On the other hand, GPS is laggy and noisy, so relying on it alone for target tracking doesn't seem very dependable either. Let's compromise: first compute the target's position from the theory, then correct it with the observation. That makes our model more complete.
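To make the "predict, then correct" idea concrete, here is a minimal sketch with made-up numbers: the model's acceleration is deliberately a bit off from the true one, the "GPS" reading is the true position plus Gaussian noise, and a naive fixed-weight blend stands in for the correction (the blend weights here are an arbitrary assumption, not Bayesian filtering yet).

```python
import random

# Hypothetical 1-D example: a car under constant acceleration.
# x_t is the model-predicted position, y_t a noisy "GPS" reading,
# true_pos the ground truth that neither matches exactly.
a_true, a_model = 2.0, 1.9   # true vs. assumed acceleration (model bias)
gps_noise = 3.0              # std-dev of the GPS noise (assumed)

random.seed(0)
for t in range(5):
    true_pos = 0.5 * a_true * t**2               # where the car really is
    x_t = 0.5 * a_model * t**2                   # theoretical formula (biased)
    y_t = true_pos + random.gauss(0, gps_noise)  # GPS reading (noisy)
    corrected = 0.7 * x_t + 0.3 * y_t            # naive fixed-weight blend
    print(t, round(x_t, 2), round(y_t, 2), round(corrected, 2))
```

Neither source is trustworthy on its own; the blend is a crude first stab at combining them, which Bayesian filtering will do properly below.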

Huh? Complete? Shameless! Your theory is inaccurate, you patch it up with a laggy, noisy GPS, and you have the nerve to tell me that's perfect? Why don't you just fly up to the sun while you're at it. Besides, your corrected position isn't necessarily accurate either. I'd say the target might be at the computed position, or it might be to its east, west, south, north, southeast...

Okay, okay, my mistake! You really are impatient; have a cup of tea and calm down, and listen to me carefully. That gentleman above has a point, and we should listen; I'd guess he must be a descendant of the sun, haha! But I digress.

Oh, right! The target's position could be in many places; there are all sorts of possibilities, since after all we never get an exact value. Hey, then we can describe it with probability! Isn't this exactly the uncertainty that probability theory talks about? My God, I'm so excited: all those years of studying probability theory finally come in handy. And look: using the observation y_t to correct x_t, isn't that just obtaining a prior probability p(x_t) first, then acquiring richer information y_t and revising the prior to obtain the posterior probability p(x_t | y_t)? Wow! Bayes, Bayes, this is Bayes, conditional probability!!!
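That prior-to-posterior correction is plain Bayes' rule. A toy numeric check, with entirely made-up numbers: a prior belief over two cells, a sensor "ping" that is more likely in cell A, and the resulting posterior.

```python
# Toy Bayes' rule check: prior belief over two cells, then a sensor reading.
prior = {"A": 0.5, "B": 0.5}    # p(x): no idea where the target is
sensor = {"A": 0.9, "B": 0.2}   # p(y = "ping" | x): ping likelier in A

# posterior p(x | y) = p(y | x) * p(x) / p(y)
evidence = sum(sensor[x] * prior[x] for x in prior)          # p(y)
posterior = {x: sensor[x] * prior[x] / evidence for x in prior}
print(posterior)   # the "ping" shifts belief toward cell A
```

The observation doesn't pin the target down, but it reshapes the probabilities, which is all the update step needs to do.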

2. Bayesian filtering

Yes, yes, yes, the commenter above has it exactly right. From a Bayesian point of view, the state estimation problem (target tracking, signal filtering) amounts to recursively computing the credibility of the current state x_t given the series of observations so far, y_{1:t}. This credibility is the posterior probability p(x_t | y_{1:t}). Bayesian filtering computes it recursively in two steps: prediction and update.

The prediction step uses the system model to predict the prior probability density of the state x_t; that is, it guesses the future state of the system from prior knowledge.

The update step uses the new observation y_t to correct that prior probability density, yielding the posterior probability density.

3. Formulas and derivation

The Bayesian filter formulas go like this:

Prediction:

    p(x_t | y_{1:t-1}) = ∫ p(x_t | x_{t-1}) · p(x_{t-1} | y_{1:t-1}) dx_{t-1}

Update:

    p(x_t | y_{1:t}) = p(y_t | x_t) · p(x_t | y_{1:t-1}) / p(y_t | y_{1:t-1})

Before the derivation, some background knowledge is needed: Bayes' formula (the conditional probability formula), the law of total probability, and the concepts of a sample space and a complete group of events. These are particularly important for understanding the derivation, so it is recommended you get familiar with them first.

One more thing: the state transitions of a dynamic system are generally assumed to follow a first-order Markov model, i.e.

① the state x_t at the current time depends only on the state x_{t-1} at the previous time;

② the observation y_t at time t depends only on the current state x_t.
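Under assumptions ① and ②, the prediction and update formulas can be run directly on a small discrete state space, where the integral becomes a sum. The transition and sensor models below are illustrative assumptions, not part of any derivation:

```python
import math

# A minimal grid-based Bayesian filter on a discrete 1-D state space,
# implementing the prediction and update steps directly.
states = list(range(10))                      # possible positions 0..9

def transition(x_new, x_old):                 # p(x_t | x_{t-1}): mostly step +1
    if x_new == (x_old + 1) % 10:
        return 0.8
    if x_new == x_old:
        return 0.2
    return 0.0

def likelihood(y, x):                         # p(y_t | x_t): Gaussian-shaped sensor
    return math.exp(-0.5 * (y - x) ** 2)

belief = [1.0 / len(states)] * len(states)    # p(x_0): uniform prior

def bayes_step(belief, y):
    # Prediction: p(x_t | y_{1:t-1}) = sum over x' of p(x_t | x') p(x' | y_{1:t-1})
    prior = [sum(transition(x, xp) * belief[xp] for xp in states) for x in states]
    # Update: multiply by the likelihood, then normalize
    # (the normalizer is exactly the denominator p(y_t | y_{1:t-1}))
    post = [likelihood(y, x) * prior[x] for x in states]
    z = sum(post)
    return [p / z for p in post]

belief = bayes_step(belief, y=3)
belief = bayes_step(belief, y=4)
print(max(states, key=lambda x: belief[x]))   # most probable position
```

Each call is one full predict-update cycle; after seeing observations near 3 and then 4, the belief concentrates where the dynamics and the sensor agree.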

The following is the derivation of the Bayesian filter formula:

  Prediction:

    p(x_t | y_{1:t-1}) = ∫ p(x_t, x_{t-1} | y_{1:t-1}) dx_{t-1}
                       = ∫ p(x_t | x_{t-1}, y_{1:t-1}) · p(x_{t-1} | y_{1:t-1}) dx_{t-1}
                       = ∫ p(x_t | x_{t-1}) · p(x_{t-1} | y_{1:t-1}) dx_{t-1}

Ha ha! This blogger got lazy and wrote it out directly; forgive the ugly handwriting. The first equality is an application of the law of total probability, the next is the definition of conditional probability, and the last line follows from assumption ①.

Here you may have a question: since we said the state x_t depends only on the previous state x_{t-1}, and has nothing whatsoever to do with y, i.e. p(x_t | x_{t-1}), then what on earth does that p(x_t | y_{1:t-1}) mean?

Don't even ask; I was tangled up over this for a long time too. In the end, the two probabilities simply mean different things. p(x_t | x_{t-1}) is a pure model-based prediction (a computation): plug x_{t-1} into the formula, and out comes x_t, simple and clear. p(x_t | y_{1:t-1}), on the other hand, says: given that we already have a batch of data related to the system's state, we can use that data to make a guess about x_t. Just a guess. (Blogger's aside: so you think I'm only guessing?)

  Update:

    p(x_t | y_{1:t}) = p(x_t | y_t, y_{1:t-1})
                     = p(y_t | x_t, y_{1:t-1}) · p(x_t | y_{1:t-1}) / p(y_t | y_{1:t-1})
                     = p(y_t | x_t) · p(x_t | y_{1:t-1}) / p(y_t | y_{1:t-1})

The third line follows from assumption ②; everything else is just an application of the conditional probability (Bayes) formula.

Here you may, like me, have another question: since y_t depends only on x_t, why isn't the denominator p(y_t | y_{1:t-1}) written directly as p(y_t)?!

I puzzled over this one for a while too. The point is that assumption ② is only a conditional independence: given x_t, the observation y_t no longer depends on the past. But in the denominator we have not conditioned on x_t, so y_t is still correlated with y_{1:t-1} through the state, and the conditioning cannot be dropped.

However, the moment Bayesian filtering meets particle filtering, these derivations become almost unimportant. Ha! So we've been deriving all this time for nothing... it makes you want to cry.

Particle filtering approximates the posterior probability density p(x_t | y_{1:t}) with N weighted samples (i.e., particles). For some problems the system's state transition is hard to model well; there is no closed-form formula, and x_t cannot be produced analytically. So instead we scatter samples and use the distribution of those samples to approximate the true distribution of the state x_t.
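As a sketch of this idea, here is a minimal bootstrap particle filter for a hypothetical 1-D random-walk state with a Gaussian sensor; all the model parameters are assumptions for illustration.

```python
import math
import random

# Minimal bootstrap particle filter: 1-D random-walk dynamics,
# Gaussian-shaped observation likelihood (parameters are assumed).
random.seed(1)
N = 1000                                      # number of particles

def propagate(x):                             # sample from p(x_t | x_{t-1})
    return x + random.gauss(0, 1.0)

def likelihood(y, x):                         # p(y_t | x_t), unnormalized Gaussian
    return math.exp(-0.5 * (y - x) ** 2)

particles = [random.gauss(0, 5.0) for _ in range(N)]   # draw from a wide prior

def pf_step(particles, y):
    # 1. Prediction: push every particle through the dynamics model
    particles = [propagate(x) for x in particles]
    # 2. Update: weight each particle by how well it explains y_t
    weights = [likelihood(y, x) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resampling: draw N particles in proportion to their weights
    return random.choices(particles, weights=weights, k=N)

for y in [1.0, 2.0, 3.0]:                     # a short stream of observations
    particles = pf_step(particles, y)

estimate = sum(particles) / N                 # posterior mean estimate
print(round(estimate, 2))
```

Each step propagates the particles (prediction), weights them by the likelihood of the new observation (update), and resamples so high-weight particles are duplicated while low-weight ones die out; the weighted cloud itself stands in for p(x_t | y_{1:t}).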

To find out what happens next, stay tuned for the next installment.

