The probability density functions of the multivariate Gaussian distribution and the univariate Gaussian distribution are given directly:

$$\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}\,|\boldsymbol{\Sigma}|^{1/2}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})\right)$$

$$\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
Here $\boldsymbol{\mu}$ is a D-dimensional mean vector and $\boldsymbol{\Sigma}$ is a D×D covariance matrix; we consider only positive definite covariance matrices (all eigenvalues positive), so that $|\boldsymbol{\Sigma}| > 0$. The multivariate and univariate Gaussians differ in form, but the univariate Gaussian is simply a multivariate Gaussian of dimension 1: when D = 1, $\boldsymbol{\Sigma}$ is a 1×1 matrix (i.e. a single number $\sigma^2$), $|\boldsymbol{\Sigma}|^{1/2} = \sigma$ (the standard deviation), $\boldsymbol{\Sigma}^{-1} = \sigma^{-2}$, and $(\mathbf{x}-\boldsymbol{\mu})^T = (x-\mu)$, so the two formulas match exactly.
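As a quick sanity check (a minimal sketch of our own, not from the original derivation; the helper `gaussian_pdf` and the sample numbers are made up), the density can be evaluated directly from the formula, and at D = 1 it agrees with the univariate version:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density N(x | mu, Sigma) evaluated directly from the formula."""
    D = mu.shape[0]
    diff = x - mu
    norm = (2 * np.pi) ** (D / 2) * np.linalg.det(sigma) ** 0.5
    quad = diff @ np.linalg.inv(sigma) @ diff  # (x - mu)^T Sigma^{-1} (x - mu)
    return float(np.exp(-0.5 * quad)) / norm

# D = 1: Sigma is the 1x1 matrix [sigma^2]
x, mu, s = 1.3, 0.5, 2.0
multi = gaussian_pdf(np.array([x]), np.array([mu]), np.array([[s ** 2]]))
uni = np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)
print(np.isclose(multi, uni))  # True
```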
In the covariance matrix $\boldsymbol{\Sigma}$, the (i, j) element is the covariance of the i-th and j-th elements of the vector $\mathbf{x}$:

$$\Sigma_{ij} = \operatorname{cov}(x_i, x_j) = \mathbb{E}\big[(x_i - \mu_i)(x_j - \mu_j)\big]$$
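For illustration (again our own sketch; the particular Σ is arbitrary), the sample covariance of draws from the distribution recovers Σ entry by entry:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 1.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])  # symmetric positive definite

# The sample covariance of many draws approximates Sigma entrywise
samples = rng.multivariate_normal(mu, sigma, size=200_000)
print(np.cov(samples, rowvar=False))  # approx [[2.0, 0.8], [0.8, 1.0]]
```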
The covariance matrices we consider here are real symmetric matrices. By linear algebra (see the Wikipedia article on spectral decomposition), every N×N real symmetric matrix has N linearly independent eigenvectors, and these eigenvectors can be chosen to form an orthonormal set (mutually orthogonal and of unit norm). Taking a two-dimensional matrix as an example:

$$\boldsymbol{\Sigma}\mathbf{u}_i = \lambda_i \mathbf{u}_i \quad (i = 1, 2), \qquad \mathbf{u}_i^T \mathbf{u}_j = \delta_{ij}$$
Therefore, the covariance matrix can be expressed in the following form (where the $\mathbf{u}_i$ are the orthonormal eigenvectors):

$$\boldsymbol{\Sigma} = \sum_{i=1}^{D} \lambda_i \mathbf{u}_i \mathbf{u}_i^T = U \Lambda U^T, \qquad U = (\mathbf{u}_1, \ldots, \mathbf{u}_D), \quad \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_D)$$
And because the middle factor $\Lambda$ is a diagonal matrix, its inverse is easy to obtain:

$$\boldsymbol{\Sigma}^{-1} = U \Lambda^{-1} U^T = \sum_{i=1}^{D} \frac{1}{\lambda_i}\, \mathbf{u}_i \mathbf{u}_i^T$$
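A small numerical check of the decomposition, assuming NumPy's `np.linalg.eigh`, which returns the eigenvalues of a symmetric matrix together with orthonormal eigenvectors as columns:

```python
import numpy as np

sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# eigh is specialized for symmetric matrices: eigenvalues are real,
# eigenvectors (columns of U) are orthonormal
lam, U = np.linalg.eigh(sigma)

print(np.allclose(U @ np.diag(lam) @ U.T, sigma))   # Sigma = U Lambda U^T
print(np.allclose(U.T @ U, np.eye(2)))              # U is orthogonal
inv = U @ np.diag(1.0 / lam) @ U.T
print(np.allclose(inv, np.linalg.inv(sigma)))       # Sigma^{-1} = U Lambda^{-1} U^T
```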
We now return to the Gaussian density and focus on the quadratic form in its exponent (the squared Mahalanobis distance):

$$\Delta^2 = (\mathbf{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})$$
Substituting the decomposition of the covariance matrix derived above, we obtain:

$$\Delta^2 = \sum_{i=1}^{D} \frac{1}{\lambda_i}\,(\mathbf{x}-\boldsymbol{\mu})^T \mathbf{u}_i \mathbf{u}_i^T (\mathbf{x}-\boldsymbol{\mu})$$
If we further define

$$y_i = \mathbf{u}_i^T (\mathbf{x}-\boldsymbol{\mu}),$$

the quadratic form becomes

$$\Delta^2 = \sum_{i=1}^{D} \frac{y_i^2}{\lambda_i}.$$
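To make the change of variables concrete (a sketch reusing the toy Σ from above), Δ² computed directly matches the sum of $y_i^2/\lambda_i$:

```python
import numpy as np

mu = np.array([0.0, 1.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
x = np.array([1.5, -0.5])

lam, U = np.linalg.eigh(sigma)

# Direct quadratic form
delta2_direct = (x - mu) @ np.linalg.inv(sigma) @ (x - mu)

# Via the rotated coordinates y_i = u_i^T (x - mu)
y = U.T @ (x - mu)
delta2_eigen = np.sum(y ** 2 / lam)

print(np.isclose(delta2_direct, delta2_eigen))  # True
```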
In vector form this is $\mathbf{y} = U(\mathbf{x}-\boldsymbol{\mu})$, where $U$ is the orthogonal matrix whose rows are the $\mathbf{u}_i^T$. Taking two-dimensional data as an example, the surfaces of constant Gaussian density are ellipses (since all eigenvalues are positive): each ellipse is centered at $\boldsymbol{\mu}$, its axes point along the eigenvectors $\mathbf{u}_i$, and the axis lengths scale as $\lambda_i^{1/2}$.
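The ellipse geometry can be checked numerically as well (our sketch; the unit contour Δ² = 1 is chosen arbitrarily): stepping from μ along an eigenvector by $\sqrt{\lambda_i}$ lands exactly on that contour:

```python
import numpy as np

mu = np.array([0.0, 1.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

lam, U = np.linalg.eigh(sigma)
inv = np.linalg.inv(sigma)

# The point mu + sqrt(lambda_i) * u_i lies on the contour Delta^2 = 1,
# so sqrt(lambda_i) is the half-length of the i-th ellipse axis
for i in range(2):
    p = mu + np.sqrt(lam[i]) * U[:, i]
    d2 = (p - mu) @ inv @ (p - mu)
    print(f"axis {i}: direction {U[:, i]}, half-length {np.sqrt(lam[i]):.3f}, Delta^2 = {d2:.3f}")
```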