Note: The following is based on the April 2016 training handout of the July (julyedu) algorithms class; see: http://www.julyedu.com/
Contents:
A. Important statistics
B. Important theorems and inequalities
C. Parameter estimation
A. Important statistics
I. Probability vs. statistics
Probability: the population distribution is known; compute the probability of events.
Statistics: the population distribution is unknown; estimate it from sample values.
II. The relationship between probability/statistics and machine learning
1. Statistics estimates a distribution; machine learning trains a model, and the model may contain multiple distributions.
2. A core evaluation metric of the training and prediction process is the model's error.
3. The error can take the form of a probability, so it is closely related to probability theory.
4. Different definitions of the error lead to different loss functions.
III. Expectation: a weighted average
Discrete case: E(X) = Σ_i x_i p_i
Continuous case: E(X) = ∫ x f(x) dx
Properties of the expectation:
1. E(C) = C
2. E(CX) = C·E(X)
3. E(X+Y) = E(X) + E(Y)
Proof (discrete case): E(X+Y) = Σ_i Σ_j (x_i + y_j) p_ij = Σ_i x_i p_i + Σ_j y_j q_j = E(X) + E(Y).
4. If X and Y are independent, then E(XY) = E(X)E(Y)
Proof: by independence p_ij = p_i q_j, so E(XY) = Σ_i Σ_j x_i y_j p_i q_j = (Σ_i x_i p_i)(Σ_j y_j q_j) = E(X)E(Y).
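The properties above can be checked numerically. A minimal sketch with numpy, using two small discrete distributions of my own choosing (the names xs, ps, ys, qs are illustrative, not from the handout):

```python
import numpy as np

# X takes values xs with probabilities ps; Y likewise, independent of X.
xs, ps = np.array([1.0, 2.0, 5.0]), np.array([0.2, 0.5, 0.3])
ys, qs = np.array([-1.0, 4.0]), np.array([0.6, 0.4])

E_X = np.sum(xs * ps)       # E(X) = sum of x_i * p_i
E_Y = np.sum(ys * qs)

c = 3.0
# 1. E(c) = c: a constant has expectation c (the probabilities sum to 1).
assert np.isclose(np.sum(c * ps), c)
# 2. E(cX) = c E(X)
assert np.isclose(np.sum(c * xs * ps), c * E_X)
# 3. E(X+Y) = E(X) + E(Y): sum over the joint distribution p_i * q_j.
E_sum = sum(p * q * (x + y) for x, p in zip(xs, ps) for y, q in zip(ys, qs))
assert np.isclose(E_sum, E_X + E_Y)
# 4. Independence implies E(XY) = E(X)E(Y).
E_prod = sum(p * q * x * y for x, p in zip(xs, ps) for y, q in zip(ys, qs))
assert np.isclose(E_prod, E_X * E_Y)
print("all expectation properties hold")
```

Because the distributions are discrete and summed exactly, the checks are exact up to floating-point rounding, not Monte Carlo approximations.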
IV. Variance: degree of deviation from the expectation
Definition: D(X) = E{[X - E(X)]^2} = E(X^2) - [E(X)]^2
Variance properties:
1. D(C) = 0
2. D(X+C) = D(X)
3. D(kX) = k^2 · D(X)
4. D(X+Y) = D(X) + D(Y) + 2E{[X - E(X)][Y - E(Y)]}
If X and Y are independent, then D(X+Y) = D(X) + D(Y).
Proof: expand [(X+Y) - E(X+Y)]^2 = [X - E(X)]^2 + [Y - E(Y)]^2 + 2[X - E(X)][Y - E(Y)] and take expectations to get property 4; when X and Y are independent, E{[X - E(X)][Y - E(Y)]} = E(XY) - E(X)E(Y) = 0, so the cross term vanishes.
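Property 4 can be verified exactly on a small joint distribution. A sketch under my own example distribution (the table `joint` is an assumption for illustration; X and Y are deliberately dependent so the cross term is nonzero):

```python
import numpy as np

xs = np.array([0.0, 1.0, 3.0])
ys = np.array([2.0, -1.0])
# joint[i, j] = P(X = xs[i], Y = ys[j]); entries sum to 1.
joint = np.array([[0.10, 0.20],
                  [0.25, 0.15],
                  [0.05, 0.25]])
px, py = joint.sum(axis=1), joint.sum(axis=0)   # marginals

E = lambda vals, probs: np.sum(vals * probs)
E_X, E_Y = E(xs, px), E(ys, py)
D_X = E((xs - E_X) ** 2, px)
D_Y = E((ys - E_Y) ** 2, py)
# Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} over the joint distribution
cov = np.sum(joint * np.outer(xs - E_X, ys - E_Y))

# D(X+Y) computed directly from the distribution of X + Y
S = np.add.outer(xs, ys)                 # all possible values of X + Y
E_S = np.sum(joint * S)
D_S = np.sum(joint * (S - E_S) ** 2)
# Property 4: D(X+Y) = D(X) + D(Y) + 2 E{[X-E(X)][Y-E(Y)]}
assert np.isclose(D_S, D_X + D_Y + 2 * cov)

k, c = 2.0, 7.0
assert np.isclose(E((xs + c - (E_X + c)) ** 2, px), D_X)        # D(X+c) = D(X)
assert np.isclose(E((k * xs - k * E_X) ** 2, px), k ** 2 * D_X)  # D(kX) = k^2 D(X)
print("variance properties verified")
```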
V. Covariance
Definition: Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} = E(XY) - E(X)E(Y)
Covariance properties:
Cov(X, Y) = Cov(Y, X)
Cov(aX + b, cY + d) = ac · Cov(X, Y)
Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)
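These identities hold exactly for the sample covariance as well, which makes them easy to check with `np.cov`. A sketch on synthetic data of my own (the variables x, y, z are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)   # correlated with x
z = rng.normal(size=1000)

cov = lambda a, b: np.cov(a, b)[0, 1]   # sample covariance of a and b

assert np.isclose(cov(x, y), cov(y, x))                          # symmetry
a, b, c, d = 2.0, 1.0, -3.0, 4.0
assert np.isclose(cov(a * x + b, c * y + d), a * c * cov(x, y))  # Cov(aX+b, cY+d) = ac Cov(X,Y)
assert np.isclose(cov(x + y, z), cov(x, z) + cov(y, z))          # additivity in the first slot
print("covariance properties verified")
```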
VI. Independence, mutual exclusivity, and uncorrelatedness
Independent: P(XY) = P(X)P(Y)
Mutually exclusive: P(XY) = 0
Uncorrelated: Cov(X, Y) = 0
X and Y independent
=> E(XY) = E(X)E(Y)
=> Cov(X, Y) = 0
=> X and Y uncorrelated
Therefore independence of X and Y implies that they are uncorrelated, but not vice versa.
Uncorrelatedness essentially means linear independence: there is no linear relationship between X and Y, but other relationships may exist, so uncorrelated X and Y are not guaranteed to be independent.
In particular, however, for a two-dimensional normal random variable (X, Y), uncorrelatedness of X and Y is equivalent to independence of X and Y.
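The classic counterexample for "uncorrelated does not imply independent" is a symmetric X together with Y = X^2. A sketch (the specific three-point distribution is my own choice):

```python
import numpy as np

xs = np.array([-1.0, 0.0, 1.0])
ps = np.array([0.25, 0.50, 0.25])   # X is symmetric around 0
ys = xs ** 2                         # Y = X^2, a deterministic function of X

E_X = np.sum(xs * ps)                # = 0 by symmetry
E_Y = np.sum(ys * ps)
E_XY = np.sum(xs * ys * ps)          # E(X^3) = 0 by symmetry
cov = E_XY - E_X * E_Y
assert np.isclose(cov, 0.0)          # uncorrelated ...

# ... but not independent: P(X=1, Y=0) = 0, since Y = 1 whenever X = 1,
# while P(X=1) * P(Y=0) = 0.25 * 0.50 = 0.125.
p_x1_y0 = 0.0
p_x1, p_y0 = 0.25, 0.50
assert p_x1_y0 != p_x1 * p_y0
print("Cov(X, Y) = 0, yet X and Y are dependent")
```

Here the relationship between X and Y is perfectly deterministic but purely nonlinear, which is exactly what covariance cannot detect.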
VII. Covariance matrix
Given n random variables (X_1, X_2, ..., X_n), if C_ij = Cov(X_i, X_j) exists for all i, j, then the matrix C = (C_ij), with entry C_ij in row i and column j,
is called the covariance matrix. Because C_ij = C_ji, this matrix is symmetric.
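A sketch estimating a covariance matrix from samples with `np.cov` and checking the symmetry noted above (the three variables x1, x2, x3 are an assumed example; np.cov treats each row of its input as one variable):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(size=n)   # correlated with x1
x3 = rng.normal(size=n)              # generated independently of the others
data = np.vstack([x1, x2, x3])       # rows = variables, columns = samples

C = np.cov(data)                     # 3x3 sample covariance matrix
assert C.shape == (3, 3)
assert np.allclose(C, C.T)           # symmetric: C_ij = C_ji
# Each entry is the pairwise covariance: C_ij = Cov(X_i, X_j).
assert np.isclose(C[0, 1], np.cov(x1, x2)[0, 1])
print(np.round(C, 2))
```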
VIII. Upper bound of the covariance
|Cov(X, Y)| <= sqrt(D(X)) · sqrt(D(Y)),
where the equality holds if and only if X and Y have a linear relationship.
IX. Correlation coefficient
Definition: rho(X, Y) = Cov(X, Y) / (sqrt(D(X)) · sqrt(D(Y))). By the covariance bound, -1 <= rho <= 1, with |rho| = 1 exactly when X and Y are linearly related.
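A sketch checking the covariance bound and the correlation coefficient on samples (the noisy sine relation between x and y is my own example; `np.corrcoef` computes the sample correlation coefficient, and `np.cov`/`np.var` with ddof=1 use the same normalization):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.sin(x) + 0.3 * rng.normal(size=2000)   # nonlinear, noisy relation

# |Cov(X, Y)| <= sqrt(D(X)) * sqrt(D(Y))
cov_xy = np.cov(x, y)[0, 1]
assert abs(cov_xy) <= np.sqrt(np.var(x, ddof=1) * np.var(y, ddof=1))

# rho = Cov(X, Y) / (sqrt(D(X)) * sqrt(D(Y))) lies in [-1, 1]
rho = np.corrcoef(x, y)[0, 1]
assert -1.0 <= rho <= 1.0

# |rho| = 1 for an exact linear relationship
y_lin = 3.0 * x - 2.0
assert np.isclose(np.corrcoef(x, y_lin)[0, 1], 1.0)
print("bound holds; |rho| = 1 for linear Y")
```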
X. Moments
For a random variable X, the k-th order origin moment is: E(X^k)
The k-th order central moment of X is: E{[X - E(X)]^k}
Note: the expectation is the first-order origin moment, and the variance is the second-order central moment.
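A sketch computing sample estimates of the k-th origin and central moments, confirming the note above (the helper names and the exponential sample are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=10_000)

def origin_moment(x, k):
    """k-th origin moment E(X^k), estimated as the sample average of x**k."""
    return np.mean(x ** k)

def central_moment(x, k):
    """k-th central moment E{[X - E(X)]^k}."""
    return np.mean((x - np.mean(x)) ** k)

# Expectation = 1st origin moment; variance = 2nd central moment.
assert np.isclose(origin_moment(x, 1), np.mean(x))
assert np.isclose(central_moment(x, 2), np.var(x))
print("3rd central moment (skewness-related):", round(central_moment(x, 3), 3))
```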
To be continued ...
Part 2: Mathematical statistics and parameter estimation