Computer Vision: Tracking Objects Based on Kalman Filter

Estimator

We want to use the measurements to estimate the motion of a moving object as accurately as possible. Accumulating multiple measurements lets us recover observed tracks that are less affected by noise. A key additional ingredient is a motion model of the object: with such a model we know not only where the object is, but also which model parameters the observations support.
This task is divided into two phases. The first is the prediction phase, which uses information gathered in the past to refine the model and predict where the person or object will appear next. The second is the correction phase, in which we obtain a measurement and use it to adjust the prediction made from the previous measurements (that is, the model). A method that carries out this two-phase estimation task is called an estimator.

Introduction to Kalman Filtering

Before introducing the principles of the Kalman filter, let's look at what the algorithm does through a popular example, to build a better intuition. The following example is adapted from a document found online.
Suppose we want to study the temperature of a room. From experience, you believe the temperature in this room is constant, that is, the temperature in the next minute equals the temperature in this minute (we use one minute as the unit of time). However, you do not trust your experience 100%: there may be deviations up or down. We treat these deviations as Gaussian white noise, i.e., noise that is uncorrelated over time and follows a Gaussian distribution. In addition, we place a thermometer in the room, but the thermometer is not accurate, and its readings differ from the actual temperature. We also treat these deviations as Gaussian white noise.
Now, for any given minute, we have two temperature values for the room: the empirical prediction (the system's predicted value) and the thermometer reading (the measured value). Next we will combine these two values, together with their respective noise, to estimate the actual temperature of the room.
Suppose we want to estimate the actual temperature at time k. First, we predict the temperature at time k from the temperature at time k-1. Because you believe the temperature is constant, the prediction for time k is the same as the value at time k-1, say 23 degrees, and the deviation of this prediction is 5 degrees (the 5 is obtained as follows: if the deviation of the optimal estimate at time k-1 was 3 degrees and the uncertainty of your prediction itself is 4 degrees, you square them, add them, and take the square root: (3^2 + 4^2)^0.5 = 5). Then you read the thermometer at time k, say 25 degrees, with a deviation of 4 degrees.
So we have two temperature values for estimating the actual temperature at time k: the predicted 23 degrees and the measured 25 degrees. What is the actual temperature? Should we trust ourselves or the thermometer? We can use their covariances to decide how much to trust each. Since Kg = 5^2 / (5^2 + 4^2), Kg ≈ 0.61, and the estimated actual temperature at time k is 23 + 0.61 × (25 - 23) = 24.22 degrees. Because the covariance of the thermometer is relatively small (we trust the thermometer more), the optimal temperature estimate is biased toward the thermometer reading.
We now have the optimal temperature estimate at time k. The next step is to move on to time k+1 and make a new optimal estimate. So far, nothing recursive seems to have appeared. Before entering time k+1, however, we must compute the deviation of the optimal estimate (24.22 degrees) at time k. It is computed as ((1 - Kg) × 5^2)^0.5 = 3.12, where 5 is the deviation of the 23-degree prediction you made at time k above. The resulting 3.12 is the deviation of the optimal estimate at time k, which is carried into time k+1 (it plays the same role as the 3 above).
In this way, the Kalman filter keeps recursing on the covariance to estimate the optimal temperature value. It runs very fast, and it only needs to retain the covariance from the previous step. The Kg above is the Kalman gain, and its value can change at each time step. Isn't that remarkable?
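
To make the recursion concrete, here is a minimal sketch of this scalar temperature example in C++; the code and variable names are illustrative additions, not part of the original article, and it reproduces the 24.22-degree estimate and the 3.12-degree deviation computed above.

#include <cmath>
#include <cstdio>

int main() {
    // Scalar Kalman recursion for the constant-temperature example above.
    double x = 23.0;  // optimal estimate at time k-1 (degrees)
    double p = 3.0;   // deviation of that estimate
    double q = 4.0;   // additional uncertainty introduced by the prediction
    double r = 4.0;   // deviation of the thermometer (measurement noise)
    double z = 25.0;  // thermometer reading at time k

    // Prediction: the temperature is assumed constant, only the deviation grows.
    double x_pred = x;
    double p_pred = std::sqrt(p * p + q * q);                  // = 5

    // Correction: the Kalman gain weighs the prediction against the measurement.
    double kg = (p_pred * p_pred) / (p_pred * p_pred + r * r); // ≈ 0.61
    double x_new = x_pred + kg * (z - x_pred);                 // ≈ 24.22
    double p_new = std::sqrt((1.0 - kg) * p_pred * p_pred);    // ≈ 3.12

    std::printf("estimate = %.2f, deviation = %.2f\n", x_new, p_new);
    return 0;
}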

Introduction to Kalman Filter

Kalman filtering is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements. A typical application of the Kalman filter is to estimate the position and velocity of an object from a finite sequence of noisy (and possibly biased) observations of its position.
The basic idea of the Kalman filter is that, under a set of strong but reasonable assumptions ("reasonable" meaning the restrictions are loose enough to make the method useful for a considerable number of practical, real-world problems), and given the history of measurements of the system, we can build a model of the system state that maximizes the posterior probability of those previous measurements.
Moreover, we can maximize this posterior probability without storing a long history of measurements: we simply update the system state model repeatedly and keep only the model itself for the next update.

Application Instance

A simple application is estimating the position and velocity of an object. Briefly: given a sequence of noisy observations of an object's position, we can continuously update our information to obtain accurate estimates of the object's position and velocity.
For example, with a radar, what we care about is tracking a target, yet the measurements of the target's position, velocity, and acceleration are noisy at every moment. The Kalman filter exploits the target's dynamics to remove the influence of the noise and obtain a good estimate of the target's position at the present moment (filtering), at a future moment (prediction), or at a past moment (interpolation or smoothing).

Three important assumptions

The Kalman filter rests on three important assumptions:

  • the system to be modeled is linear
  • the noise that affects measurement is white noise
  • the noise is essentially Gaussian in distribution

The first assumption means that the system state at time k can be expressed as a matrix multiplied by the system state at time k-1. The remaining two assumptions, that the noise is Gaussian white noise, mean that the noise is uncorrelated over time and that its amplitude can be modeled accurately using only its mean and covariance (that is, the noise is completely described by its first and second moments).
Given these three assumptions, the Kalman filter is the best way to combine data obtained from different sources, or from the same source at different times: we acquire new information, then fuse it with what we already know, weighting the old and the new information according to how certain we are of each, to update our estimate.

Mathematical Background: the Basic Dynamic System Model of the Kalman Filter

The Kalman filter is built on linear algebra and the hidden Markov model. The underlying dynamic system can be represented by a Markov chain built on linear operators perturbed by Gaussian (i.e., normally distributed) noise. The state of the system can be expressed as a vector of real numbers. At each increment of discrete time, a linear operator acts on the current state to generate the new state, adding some noise and, where applicable, the control information from the system's known controllers. At the same time, another linear operator, also affected by noise, produces the visible outputs from these hidden states.

The Kalman filter can be viewed as analogous to a hidden Markov model, with the obvious difference that the hidden state variables take values in a continuous space, whereas the hidden Markov model's state space is discrete. In addition, the hidden Markov model can describe an arbitrary distribution for the next state, in contrast to the Gaussian noise model used in the Kalman filter.

To estimate the internal state of a process from a series of noisy observations, we apply the Kalman filter. We must model the process within the Kalman filter framework, which means that for each step k we define the following.
The Kalman filter assumes that the true state at time k evolves from the state at time k-1 according to
x[k] = F[k]·x[k-1] + B[k]·u[k] + w[k]
where:

  • F[k] is the state transition model (state transition matrix) applied to the previous state x[k-1].
  • B[k] is the control-input model (control matrix) applied to the control vector u[k]; u[k] allows external control to be applied to the system.
  • w[k] is the process noise, assumed to be zero-mean Gaussian white noise with covariance Q[k]: w[k] ~ N(0, Q[k]).

At time k, an observation (measurement) z[k] of the true state x[k] is made, satisfying
z[k] = H[k]·x[k] + v[k]
where:

  • H[k] is the observation model (measurement matrix), which maps the true state space into the observation space.
  • v[k] is the observation noise, assumed to be zero-mean Gaussian white noise with covariance R[k]: v[k] ~ N(0, R[k]).
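
As a concrete illustration (an assumed example, not given in the original article): for an object moving along one axis with position p and velocity v, sampled every Δt seconds with only the position measured, the state is x[k] = [p, v]^T and the model matrices become
F[k] = [1  Δt; 0  1],   B[k] = [Δt^2/2; Δt] (if the control input u[k] is an acceleration),   H[k] = [1  0],
so that the position advances by v·Δt at each step, while only p appears in the measurement z[k].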

Model diagram:

[Figure: basic dynamic system model of the Kalman filter]

The figure shows the basic dynamic system model of the Kalman filter. Circles represent vectors, squares represent matrices, and asterisks represent Gaussian noise, whose covariance matrix is noted at the lower right.

  • The initial state and the noise vectors at each time step, {x0, w1, ..., wk, v1, ..., vk}, are all assumed to be mutually independent.
  • In reality, dynamic systems in the real world do not strictly conform to this model, but because the Kalman filter is designed to operate in the presence of noise, an approximate match is enough to make the filter very useful.

Kalman model

In the Kalman filter application, we will consider three kinds of motion.
Dynamic Motion
This is the motion we expect the system to undergo as a direct consequence of its state at the previous measurement.
Control Motion
This is the motion we expect because some known external influence has been applied to the system for some reason. The most common example of control motion is when we estimate the state of a system under our control: we know what effect our control input will have on the system.
Random Motion
Even in the simplest one-dimensional case, if the observed object may move for any reason, however small, we need to include this random motion in the prediction step. The effect of this random motion is simply to increase the covariance of the state estimate over time.

Formula

The Kalman filter is a recursive estimator: the estimate of the current state can be computed from nothing more than the state estimate of the previous time step and the observation of the current state. Unlike other estimation techniques, the Kalman filter does not require a history of observations or estimates. It is also a pure time-domain filter; unlike low-pass filters and other frequency-domain filters, it does not need to be designed in the frequency domain and then transformed for application in the time domain.


[Figure: Kalman filter variables]

Kalman filtering involves two phases: prediction and update. In the prediction phase, the filter uses the estimate of the previous state to predict the current state. In the update phase, the filter uses the observation of the current state to refine the value obtained in the prediction phase, yielding a more accurate estimate of the current state.

Prediction: [Figure: prediction formulas]
Update: [Figure: update formulas and basic concept diagram]
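
For reference, the standard textbook forms of the prediction and update equations, written in the notation used in Steps 2 and 3 below (the figures from the original are not reproduced here), are:

Prediction:
    x'(k) = A·x(k-1) + B·u(k)
    P'(k) = A·P(k-1)·A^T + Q

Update:
    K(k) = P'(k)·H^T · (H·P'(k)·H^T + R)^(-1)
    x(k) = x'(k) + K(k)·(z(k) - H·x'(k))
    P(k) = (I - K(k)·H)·P'(k)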
Step 1: Using the KalmanFilter class in OpenCV

The KalmanFilter class requires the following variables to be initialized:
the transition matrix, the measurement matrix, the control matrix (zero if there is no control), the process noise covariance matrix, the measurement noise covariance matrix, the posterior error covariance matrix, the corrected state of the previous step, and the current measurement.

void KalmanFilter::init(int dynamParams, int measureParams, int controlParams=0, int type=CV_32F)
// Parameters:
//   dynamParams   – Dimensionality of the state.
//   measureParams – Dimensionality of the measurement.
//   controlParams – Dimensionality of the control vector.
//   type          – Type of the created matrices; should be CV_32F or CV_64F.
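
As a small sketch of how this might be used (the dimensions below are an assumption for a 2D position/velocity tracker, not specified in the article):

#include <opencv2/video/tracking.hpp>

// Equivalent to constructing cv::KalmanFilter(4, 2, 0, CV_32F) directly:
// 4 state variables [px, py, vx, vy], 2 measured variables [px, py], no control input.
void setupFilter(cv::KalmanFilter& kf)
{
    kf.init(4, 2, 0, CV_32F);
    // Corrected state of the previous step (used by the first call to predict()).
    kf.statePost = (cv::Mat_<float>(4, 1) << 0, 0, 0, 0);
}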
Step 2

Call the predict method of the KalmanFilter class to obtain the predicted state.
The formula for computing the predicted state is:
Predicted state: x'(k) = A·x(k-1) + B·u(k)
where x(k-1) is the corrected state of the previous step. For the first cycle its value is given during initialization; in subsequent cycles the KalmanFilter class computes it internally. A, B, and u(k) are also given values. In this way we obtain x'(k), the predicted value of the system state.

const Mat& KalmanFilter::predict(const Mat& control=Mat())
// Parameters:
//   control – the optional control input
Step 3:

Call the correct method of the KalmanFilter class to obtain the state estimate corrected by the current observation.
The formula is as follows:
Corrected state: x(k) = x'(k) + K(k)·(z(k) - H·x'(k))
Here x'(k) is the result computed in Step 2, z(k) is the current measurement (the input vector obtained from the external measurement), and H is the measurement matrix given when the KalmanFilter class was initialized. K(k) is the Kalman gain, whose formula is:
Kalman gain matrix: K(k) = P'(k)·H^T · inv(H·P'(k)·H^T + R)
The quantities needed to compute the gain are either given at initialization or computed from the other formulas of Kalman filter theory.

const Mat& KalmanFilter::correct(const Mat& measurement)
// Parameters:
//   measurement – the measured system parameters

After Step 3 we have the corrected state for the current time step; we then keep looping over Step 2 and Step 3 to carry out the Kalman filtering process.
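
Putting the three steps together, here is a minimal, self-contained sketch; the constant-velocity model, noise levels, and simulated measurements are assumptions chosen for illustration rather than details from the original article.

#include <opencv2/video/tracking.hpp>
#include <opencv2/core.hpp>
#include <cstdio>

int main()
{
    // Step 1: initialize a 2D constant-velocity tracker, state = [px, py, vx, vy].
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    float dt = 1.0f;
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1,  0,
        0, 0, 0,  1);
    kf.measurementMatrix = (cv::Mat_<float>(2, 4) <<
        1, 0, 0, 0,
        0, 1, 0, 0);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));
    kf.statePost = (cv::Mat_<float>(4, 1) << 0, 0, 1, 0.5f);

    cv::RNG rng;
    for (int k = 1; k <= 20; ++k) {
        // Step 2: predict x'(k) from the corrected state of the previous step.
        cv::Mat prediction = kf.predict();

        // Simulate a noisy measurement z(k) of the true position (true motion: vx = 1, vy = 0.5).
        cv::Mat measurement = (cv::Mat_<float>(2, 1) <<
            k * 1.0f + (float)rng.gaussian(0.3),
            k * 0.5f + (float)rng.gaussian(0.3));

        // Step 3: correct the prediction with the measurement to get x(k).
        cv::Mat estimated = kf.correct(measurement);

        std::printf("k=%2d  predicted=(%.2f, %.2f)  corrected=(%.2f, %.2f)\n",
                    k,
                    prediction.at<float>(0), prediction.at<float>(1),
                    estimated.at<float>(0), estimated.at<float>(1));
    }
    return 0;
}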

References

1. Learning OpenCV (Chinese edition)
2. Learning OpenCV: Kalman filtering

Reprinted; please credit the author Jason Ding and the source:
GitHub home: http://jasonding1354.github.io/
CSDN blog: http://blog.csdn.net/jasonding1354
Jianshu home: http://www.jianshu.com/users/2bd9b48f6ea8/latest_articles
