1. Concept of Random Variables
As the name implies, a random variable is a variable whose value is random. Its opposite is a "deterministic variable", that is, a variable whose value follows a strict rule, such as the distance from Beijing to Shanghai. Strictly speaking, however, many variables usually regarded as deterministic are in essence random; it is only because the random disturbance is small, and falls within the required precision, that we may treat them as deterministic.
According to the set of values a random variable can take, random variables fall into two classes: discrete random variables and continuous random variables. The notion of a continuous variable is, however, only a mathematical abstraction, because any quantity has a unit and can only be measured to finite precision, and is therefore necessarily discrete in practice.
2. Distribution of Discrete Random Variables
Definition 1.2.1 If $X$ is a discrete random variable whose possible values are $\{a_1, a_2, \dots\}$, then
$p_i = P(X = a_i), \quad i = 1, 2, \dots$
is called the probability function of $X$. It has the following properties:
$p_i \geqslant 0, \quad p_1 + p_2 + \dots = 1$
The probability function of $X$ shows how the total probability 1 is distributed among its possible values; for this reason it is also called the "probability distribution" of the random variable $X$.
Definition 1.2.2 If $X$ is a random variable, the function
$F(x) = P(X \le x), \quad -\infty < x < \infty$
is called the distribution function of $X$.
For discrete random variables, the probability function and the distribution function are equivalent in the following sense.
$F(x) = P(X \le x) = \sum_{\{i:\, a_i \le x\}} p_i$
Evaluating $F(x)$ from the $p_i$ by this formula is immediate; conversely, the $p_i$ can be recovered from $F(x)$. For example, when $X$ takes integer values, note that
$F(i) = P(X \le i-1) + P(X = i)$
so that $P(X = i) = F(i) - F(i-1)$.
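As a concrete illustration (a minimal Python sketch; the function name `cdf` and the example values are ours, not from the text), the distribution function of a discrete random variable can be computed by summing its probability function, and the probability function recovered by differencing:

```python
# Minimal sketch: the distribution function F(x) of a discrete random
# variable, computed from its probability function {a_i: p_i}.
def cdf(prob, x):
    """Return F(x) = P(X <= x) = sum of p_i over all a_i <= x."""
    return sum(p for a, p in prob.items() if a <= x)

# Example: X takes values 0, 1, 2 with probabilities 0.2, 0.5, 0.3.
prob = {0: 0.2, 1: 0.5, 2: 0.3}
print(cdf(prob, 1))                  # F(1) = 0.7
print(cdf(prob, 1) - cdf(prob, 0))   # recovers P(X = 1) = 0.5
```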
For any random variable $X$, its distribution function $F(x)$ has the following general properties:
1) $F(x)$ is monotone non-decreasing: when $x_1 < x_2$, $F(x_1) \le F(x_2)$;
2) when $x \to \infty$, $F(x) \to 1$; when $x \to -\infty$, $F(x) \to 0$.
3. Some Common Discrete Distributions
Binomial distribution $X \sim B(n, p)$
$p_i = b(i; n, p) = \dbinom{n}{i} p^i (1-p)^{n-i}, \quad i = 0, 1, \dots, n$
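The binomial probabilities can be computed directly from this formula; here is a minimal Python sketch using only the standard library (the function name `binom_pmf` and the example values are ours):

```python
from math import comb

# Minimal sketch: binomial probability b(i; n, p) = C(n, i) p^i (1-p)^(n-i).
def binom_pmf(i, n, p):
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Example: n = 10 trials with success probability p = 0.3.
print(binom_pmf(3, 10, 0.3))                           # P(X = 3), about 0.2668
print(sum(binom_pmf(i, 10, 0.3) for i in range(11)))   # probabilities sum to 1
```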
Poisson distribution
$X \sim P(\lambda)$, where $\lambda > 0$ is a constant, and
$P(X = i) = e^{-\lambda} \lambda^i / i!, \quad i = 0, 1, 2, \dots$
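The same kind of direct computation works for the Poisson probabilities (a minimal Python sketch; `poisson_pmf` and the choice $\lambda = 2$ are ours):

```python
from math import exp, factorial

# Minimal sketch: Poisson probability P(X = i) = e^(-lambda) lambda^i / i!.
def poisson_pmf(i, lam):
    return exp(-lam) * lam**i / factorial(i)

# Example: lambda = 2.
print(poisson_pmf(0, 2.0))                             # e^(-2), about 0.1353
print(sum(poisson_pmf(i, 2.0) for i in range(50)))     # close to 1
```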
Hypergeometric Distribution
There are $N$ products, of which $M$ are nonconforming. If $n$ of them are drawn at random without replacement, then the number $X$ of nonconforming products in the sample follows the hypergeometric distribution, written $X \sim h(n, N, M)$. Its probability distribution is:
$P(X = k) = \frac{\dbinom{M}{k} \dbinom{N-M}{n-k}}{\dbinom{N}{n}}, \quad k = 0, 1, \dots, r$
where $r = \min\{M, n\}$, and $M \le N$, $n \le N$; $n$, $N$, $M$ are all positive integers.
When $N \gg n$, that is, when the sample size $n$ is much smaller than the total number of products $N$, the proportion of nonconforming products left in the population after each draw changes very little from $p = M/N$; sampling without replacement can then be treated approximately as sampling with replacement, and the hypergeometric distribution can be approximated by the binomial distribution:
$\frac{\dbinom{M}{k} \dbinom{N-M}{n-k}}{\dbinom{N}{n}} \cong \dbinom{n}{k} p^k (1-p)^{n-k}, \quad \text{where } p = \frac{M}{N}$
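The quality of this approximation is easy to check numerically; below is a minimal Python sketch (the parameter values are illustrative, not from the text):

```python
from math import comb

# Minimal sketch: hypergeometric probability and its binomial approximation
# when the population size N is much larger than the sample size n.
def hypergeom_pmf(k, n, N, M):
    """P(X = k) when drawing n items without replacement from N, M nonconforming."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: N = 10000 products, M = 1000 nonconforming, sample size n = 10, p = M/N = 0.1.
N, M, n = 10000, 1000, 10
for k in range(4):
    # The two columns agree to several decimal places.
    print(k, hypergeom_pmf(k, n, N, M), binom_pmf(k, n, M / N))
```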
Geometric Distribution $X \sim Ge(p)$
In a sequence of Bernoulli trials in which event $A$ occurs with probability $p$ on each trial, let $X$ be the number of the trial on which $A$ occurs for the first time. Then
$P(X = k) = (1-p)^{k-1} p, \quad k = 1, 2, \dots$
Memorylessness of the geometric distribution: if $X \sim Ge(p)$, then
$P(X > m + n \mid X > m) = P(X > n)$
This formula says that if event $A$ has not occurred in the first $m$ trials, then the probability that it still does not occur in the next $n$ trials depends only on $n$, as if the results of the first $m$ trials had been forgotten.
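The identity can be verified numerically, using the fact that $P(X > k) = (1-p)^k$ for $X \sim Ge(p)$ (a minimal Python sketch; the values of $p$, $m$, $n$ are illustrative):

```python
# Minimal sketch: numerical check of the memorylessness of the geometric distribution.
def geom_tail(k, p):
    """P(X > k) for X ~ Ge(p): the first k trials are all failures."""
    return (1 - p)**k

p, m, n = 0.3, 4, 2
lhs = geom_tail(m + n, p) / geom_tail(m, p)  # P(X > m+n | X > m)
rhs = geom_tail(n, p)                        # P(X > n)
print(lhs, rhs)                              # both equal (1-p)^n = 0.49
```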
Negative Binomial Distribution $X \sim Nb(r, p)$
In a sequence of Bernoulli trials, the probability that event $A$ occurs in each trial is $p$. If $X$ is the number of the trial on which $A$ occurs for the $r$-th time, then $X$ takes the values $r, r+1, \dots, r+m, \dots$, and
$P(X = k) = \dbinom{k-1}{r-1} p^r (1-p)^{k-r}, \quad k = r, r+1, \dots$
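As with the other distributions, this formula can be evaluated directly (a minimal Python sketch; `nbinom_pmf` and the example values $r = 3$, $p = 0.5$ are ours):

```python
from math import comb

# Minimal sketch: negative binomial probability P(X = k) = C(k-1, r-1) p^r (1-p)^(k-r),
# where X is the number of the trial on which the r-th occurrence of A happens.
def nbinom_pmf(k, r, p):
    return comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)

# Example: r = 3, p = 0.5.
print(nbinom_pmf(5, 3, 0.5))                              # P(X = 5) = C(4,2) * 0.5^5 = 0.1875
print(sum(nbinom_pmf(k, 3, 0.5) for k in range(3, 200)))  # close to 1
```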
4. Distribution of Continuous Random Variables
The probability distribution of a continuous random variable cannot be described in the same way as that of a discrete one, because its possible values fill an entire interval and cannot be listed one by one.
One way to describe the probability distribution of a continuous random variable is through its distribution function; a more convenient tool, however, and the one more commonly used in both theory and practice, is the so-called "probability density function", or density function for short.
Definition 1.4.1 Let $X$ be a continuous random variable with distribution function $F(x)$. Its derivative $f(x) = F'(x)$ is called the probability density function of $X$.
The density function $f(x)$ of a continuous random variable $X$ has the following three basic properties:
1) $f(x) \ge 0$;
2) $\int_{-\infty}^{\infty} f(x)\,dx = 1$;
3) for any constants $a < b$, $P(a \le X \le b) = F(b) - F(a) = \int_{a}^{b} f(x)\,dx$.
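Property 3) is what makes the density useful in computation: probabilities are areas under the density curve. Here is a minimal Python sketch (the midpoint-rule integrator and the example density are ours, not from the text):

```python
from math import exp

# Minimal sketch: P(a <= X <= b) as the area under the density between a and b,
# approximated by a midpoint Riemann sum.
def prob_between(f, a, b, steps=100000):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

# Example with the density f(x) = e^(-x) for x >= 0 (an exponential density with lambda = 1):
f = lambda x: exp(-x) if x >= 0 else 0.0
print(prob_between(f, 0.0, 1.0))  # about 1 - e^(-1) = 0.6321
```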
Normal Distribution
The central limit theorem tells us that if a variable is the sum of a large number of small, independent random factors, then it is approximately a normal variable. Therefore many random variables can be described, or approximated, by a normal distribution, such as measurement errors, product weights, human heights, and annual rainfall.
If the density function of the random variable $X$ is
$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty < x < +\infty$
then $X$ follows the normal (Gaussian) distribution, written $X \sim N(\mu, \sigma^2)$.
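The density formula translates directly into code (a minimal Python sketch; `normal_pdf` and the standard-normal example are ours):

```python
from math import exp, pi, sqrt

# Minimal sketch: the normal density with mean mu and standard deviation sigma.
def normal_pdf(x, mu, sigma):
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sqrt(2 * pi) * sigma)

# Example: standard normal (mu = 0, sigma = 1).
print(normal_pdf(0.0, 0.0, 1.0))  # peak value 1/sqrt(2*pi), about 0.3989
```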
Uniform Distribution
If the density function of the random variable $X$ is
$p(x) = \begin{cases} \frac{1}{b-a}, & a < x < b; \\ 0, & \text{otherwise}, \end{cases}$
then $X$ is said to follow the uniform distribution on the interval $(a, b)$, written $X \sim U(a, b)$.
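By property 3), for the uniform density the probability of any subinterval of $(a, b)$ depends only on its length; a minimal Python sketch (the values below are illustrative):

```python
# Minimal sketch: the density of the uniform distribution U(a, b) and the
# probability of a subinterval, which depends only on its length.
def uniform_pdf(x, a, b):
    return 1.0 / (b - a) if a < x < b else 0.0

a, b = 0.0, 10.0
print(uniform_pdf(3.0, a, b))   # density value 0.1 anywhere inside (a, b)
print((5.0 - 2.0) / (b - a))    # P(2 < X < 5) = (5 - 2)/(b - a) = 0.3
```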
Exponential Distribution
If the density function of the random variable $X$ is
$p(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0; \\ 0, & x < 0, \end{cases}$
then $X$ follows the exponential distribution, written $X \sim \mathrm{Exp}(\lambda)$.
Because an exponentially distributed random variable takes only non-negative real values, the exponential distribution is commonly used to model various "lifetime" distributions, such as the lifetimes of electronic components and of animals.
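In the lifetime interpretation, integrating the density from $t$ to $\infty$ gives the survival probability $P(X > t) = e^{-\lambda t}$; a minimal Python sketch (the values of $\lambda$ and $t$ are illustrative):

```python
from math import exp

# Minimal sketch: the exponential density and the survival probability
# P(X > t) = e^(-lambda * t), i.e. the chance of a lifetime exceeding t.
def expon_pdf(x, lam):
    return lam * exp(-lam * x) if x >= 0 else 0.0

lam, t = 0.5, 3.0
print(expon_pdf(1.0, lam))  # density at x = 1
print(exp(-lam * t))        # P(X > 3), about 0.2231
```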