Mathematical Foundations of Probability


Sample Space

For a random experiment, although the outcome cannot be predicted before the experiment is performed, the set of all possible outcomes is known in advance. The set of all possible outcomes of a random experiment E is called the sample space of E, denoted S. Each element of the sample space, that is, each possible outcome of E, is called a sample point. For example, for the experiment E of tossing a coin and observing heads H or tails T, we have S = {H, T}.

Frequency Probability

Frequency describes how often an event occurs; it is obtained from the results of many repeated trials.

Probability describes the likelihood that an event occurs in a single trial.

If the number of trials is large enough, the frequency is, in a well-defined sense, close to the probability.
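A quick way to see this is to simulate coin tosses and watch the observed frequency of heads approach the true probability 0.5 as the number of trials grows. The sketch below is illustrative only; the trial counts and the random seed are arbitrary choices.

```python
import random

# Estimate the frequency of "heads" for increasing numbers of coin tosses.
# As the trial count grows, the frequency should approach the probability 0.5.
random.seed(0)  # fixed seed so the run is reproducible

for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>6}: frequency of heads = {heads / n:.4f}")
```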

Conditional Probability

Let A and B be two events with P(A) > 0. Then

$\Large P(B|A) = \frac{P(AB)}{P(A)}$

is called the conditional probability of event B given that event A has occurred.
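As an illustration (not part of the original text), the sketch below estimates a conditional probability by simulation: rolling two fair dice, it compares the simulated value of P(sum ≥ 10 | first die = 6) with the exact value P(AB)/P(A) = (3/36)/(6/36) = 0.5.

```python
import random

random.seed(1)
n = 100_000
count_a = 0    # event A: the first die shows 6
count_ab = 0   # event AB: the first die shows 6 AND the sum is at least 10

for _ in range(n):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 6:
        count_a += 1
        if d1 + d2 >= 10:
            count_ab += 1

# P(B|A) = P(AB) / P(A), estimated from the simulated frequencies
print(count_ab / count_a)   # should be close to 0.5
```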

Multiplication Theorem

If P(A) > 0, then:

$\Large P(AB) = P(B|A)P(A)$

and, provided P(AB) > 0,

$\Large P(ABC) = P(C|AB)P(B|A)P(A)$

The theorem extends in the same way to any number of events.

Total Probability Formula

Let S be the sample space of experiment E and let A be an event of E. If $B_1$, $B_2$, $\ldots$, $B_n$ form a partition of S and $P(B_i) > 0$ for every i, then:

$\Large P(A) = P(A|B_1)P(B_1) + P(A|B_2)P(B_2) + \ldots + P(A|B_n)P(B_n)$
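As a small made-up example of the formula, suppose three suppliers B1, B2, B3 provide 50%, 30%, and 20% of all parts, with defect rates 1%, 2%, and 3% respectively (these numbers are chosen purely for illustration). The overall defect probability P(A) follows directly:

```python
# P(B_i): share of parts from each supplier (a partition of the sample space)
p_b = [0.50, 0.30, 0.20]
# P(A | B_i): defect rate of each supplier (illustrative numbers)
p_a_given_b = [0.01, 0.02, 0.03]

# Total probability formula: P(A) = sum_i P(A|B_i) * P(B_i)
p_a = sum(pa * pb for pa, pb in zip(p_a_given_b, p_b))
print(p_a)  # 0.017
```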

Bayesian Formula

$\Large P(B_i|A) = \frac{P(A|B_i)P(B_i)}{\sum_{j=1}^{n} P(A|B_j)P(B_j)}$

Prior Probability and Posterior Probability

Example: when a machine is properly adjusted, the product pass rate is 90%; when it is faulty, the pass rate is 30%. Each morning when the machine is started, the probability that it is properly adjusted is 75%. Given that the first product of the morning is qualified, what is the probability that the machine is properly adjusted? Let event A be "the product is qualified" and event B be "the machine is properly adjusted". Then P(A|B) = 0.9, P(A|B') = 0.3, P(B) = 0.75, P(B') = 0.25, and we want P(B|A). By the Bayesian formula, P(B|A) = (0.9 × 0.75) / (0.9 × 0.75 + 0.3 × 0.25) = 0.675 / 0.75 = 0.9. P(B) is called the prior probability; it is obtained from past empirical data. P(B|A), the correction of P(B) after observing that the first product is qualified, is called the posterior probability. The posterior probability gives a better picture of the machine's condition.
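The worked example above maps directly onto the Bayesian formula; the short sketch below reproduces the calculation.

```python
# Prior: the machine is properly adjusted when started in the morning
p_b = 0.75          # P(B)
p_not_b = 1 - p_b   # P(B')

# Likelihoods: pass rate when adjusted vs. when faulty
p_a_given_b = 0.9       # P(A|B)
p_a_given_not_b = 0.3   # P(A|B')

# Total probability of a qualified product, then the posterior P(B|A)
p_a = p_a_given_b * p_b + p_a_given_not_b * p_not_b
posterior = p_a_given_b * p_b / p_a
print(posterior)  # 0.9 (up to floating-point rounding)
```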

Independent Event

If events A and B satisfy

$\Large P(AB) = P(A)P(B)$

then A and B are said to be mutually independent. The definition extends in the same way to more than two events.
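For instance (an example not in the original), with two fair dice the events "the first die is even" and "the second die shows 6" are independent, which can be checked exactly by enumerating the 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))

A = {o for o in outcomes if o[0] % 2 == 0}   # first die is even
B = {o for o in outcomes if o[1] == 6}       # second die shows 6

p_a = len(A) * p
p_b = len(B) * p
p_ab = len(A & B) * p

print(p_ab == p_a * p_b)   # True: P(AB) = P(A)P(B), so A and B are independent
```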

Random Variable

If the outcomes of a random experiment are quantified, for example coin tossing with 1 for heads and 0 for tails, and the quantified result is represented by a variable X, then X is a random variable whose value depends on the outcome of the experiment. The formal definition: let E be a random experiment with sample space S = {e}. If for every e in S there is a real number X(e) associated with it, we obtain a single-valued function X = X(e) defined on S, which is called a random variable. If X takes only finitely many values, or countably infinitely many values, X is called a discrete random variable.

Probability Distribution

If the possible values of a discrete random variable X are $x_k (k = 1, 2, \ldots)$, and the probability of X taking each value is:

$\Large P\{X = x_k\} = p_k$

this is called the probability distribution, or distribution law, of the discrete random variable X.

Distribution Functions

For a non-discrete random variable X, the possible values cannot be listed one by one, so X cannot be described by a distribution law as a discrete random variable can. The distribution function of a random variable is therefore introduced.

Let X be a random variable and x any real number. The function

$\Large F(x) = P\{X \leq x\}$

is called the distribution function of X. Although a discrete random variable can be fully described by its distribution law, a distribution function is defined for both discrete and non-discrete random variables for the sake of mathematical uniformity.
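For a concrete (illustrative) case, the distribution function of a fair six-sided die is a step function: F(x) = P{X ≤ x} jumps by 1/6 at each of the points 1 through 6. A minimal sketch:

```python
from fractions import Fraction

# Distribution law of a fair die: P{X = k} = 1/6 for k = 1..6
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """Distribution function F(x) = P{X <= x} for the fair die."""
    return sum(p for k, p in pmf.items() if k <= x)

for x in (0.5, 1, 2.5, 6):
    print(f"F({x}) = {cdf(x)}")
# F(0.5) = 0, F(1) = 1/6, F(2.5) = 1/3, F(6) = 1
```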

Probability Density of Continuous Random Variables

If for the distribution function F(x) of a random variable X there exists a non-negative function f(x) such that for every real number x:

$\Large F(x) = \int_{-\infty}^{x} f(t) dt$

then X is called a continuous random variable, and f(x) is called the probability density function of X.

Probability Density has the following properties:

(1) $\large f(x) \geq 0$

(2) $\large \int_{-\infty}^{\infty} f(x) dx = 1$

(3) $\large P\{x_1 < X \leq x_2\} = F(x_2) - F(x_1) = \int_{x_1}^{x_2} f(x) dx$
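To see properties (2) and (3) in action, a rough numerical check (illustrative only, with an arbitrarily chosen rate parameter) integrates the exponential density f(x) = λe^{−λx}, x ≥ 0, with a simple midpoint Riemann sum and compares P{1 < X ≤ 2} with F(2) − F(1).

```python
import math

lam = 1.5  # rate parameter of an exponential density, chosen for illustration

def f(x):
    """Probability density f(x) = lam * exp(-lam * x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def integrate(a, b, n=100_000):
    """Simple midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def F(x):
    """Exact distribution function of the exponential: F(x) = 1 - exp(-lam*x)."""
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

print(integrate(0, 50))   # property (2): close to 1
print(integrate(1, 2))    # property (3): P{1 < X <= 2} via the density
print(F(2) - F(1))        # the same value via the distribution function
```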

Expectation

Let the distribution law of a discrete random variable X be:

$\Large P\{X = x_k\} = p_k$

If the series

$\Large \sum_{k=1}^{\infty} x_k p_k$

converges absolutely, its sum is called the expectation of the random variable X, denoted E(X).

If the probability density of a continuous random variable X is f(x), its expectation is:

$\Large E(X) = \int_{-\infty}^{\infty} x f(x) dx$

If Y = g(X) is a function of X, the expectation of Y is:

$\Large E(Y) = \int_{-\infty}^{\infty} g(x) f(x) dx$

Expectation is also called the mean.
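A minimal sketch of the discrete definition: the expectation of a fair die is Σ x_k p_k = 3.5, and a simulated sample mean approaches it (the number of rolls is an arbitrary choice).

```python
import random
from fractions import Fraction

# Distribution law of a fair die: P{X = k} = 1/6 for k = 1..6
p = Fraction(1, 6)
expectation = sum(k * p for k in range(1, 7))   # E(X) = sum_k x_k * p_k
print(expectation)  # 7/2, i.e. 3.5

# A simulated sample mean approaches E(X) as the number of rolls grows
random.seed(2)
n = 200_000
sample_mean = sum(random.randint(1, 6) for _ in range(n)) / n
print(round(sample_mean, 3))  # close to 3.5
```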

Variance

Let X be a random variable. If $E\{[X - E(X)]^2\}$ exists, it is called the variance of X, denoted D(X) or Var(X).

The variance can be computed with the formula $D(X) = E(X^2) - [E(X)]^2$.

The square root of the variance, $\sqrt{D(X)}$, is denoted $\sigma(X)$ and is called the standard deviation (also the mean square deviation).
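Continuing the fair-die example (illustrative only), the sketch below computes D(X) both from the definition E{[X − E(X)]²} and from the shortcut D(X) = E(X²) − [E(X)]², then the standard deviation σ(X).

```python
import math
from fractions import Fraction

p = Fraction(1, 6)      # fair die: P{X = k} = 1/6, k = 1..6
values = range(1, 7)

ex = sum(k * p for k in values)                    # E(X) = 7/2
var_def = sum((k - ex) ** 2 * p for k in values)   # E{[X - E(X)]^2}
ex2 = sum(k ** 2 * p for k in values)              # E(X^2) = 91/6
var_formula = ex2 - ex ** 2                        # E(X^2) - [E(X)]^2

print(var_def, var_formula)    # both 35/12 (about 2.9167)
print(math.sqrt(var_def))      # standard deviation sigma(X), about 1.7078
```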

Moment

Let X be a random variable.

The k-th order raw moment of X: $E(X^k)$

The k-th order central moment of X: $E\{[X - E(X)]^k\}$

Clearly, the expectation of X is its first-order raw moment, and the variance is its second-order central moment.
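A small helper (illustrative, not from the original) that computes k-th order raw and central moments of a discrete distribution law, confirming that the first raw moment is E(X) and the second central moment is D(X):

```python
from fractions import Fraction

def raw_moment(pmf, k):
    """k-th order raw moment E(X^k) of a discrete distribution law {x: p}."""
    return sum(x ** k * p for x, p in pmf.items())

def central_moment(pmf, k):
    """k-th order central moment E{[X - E(X)]^k}."""
    mean = raw_moment(pmf, 1)
    return sum((x - mean) ** k * p for x, p in pmf.items())

die = {x: Fraction(1, 6) for x in range(1, 7)}   # fair die, for illustration
print(raw_moment(die, 1))      # 7/2  -> the expectation E(X)
print(central_moment(die, 2))  # 35/12 -> the variance D(X)
```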


Common Probability Distribution

0-1 Distribution (Bernoulli Distribution)

The random variable X takes only the values 0 and 1, and its distribution law is:

$\Large P\{X = k\} = p^k (1-p)^{1-k}, \quad k = 0, 1$


Binomial Distribution

The random variable X denotes the number of times event A occurs in n independent Bernoulli trials, for example the number of heads when a coin is tossed n times. The distribution law of X is:

$\Large P\{X = k\} = {n \choose k} p^k (1-p)^{n-k}, \quad k = 0, 1, 2, \ldots, n$
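A sketch of the binomial distribution law, comparing the exact probabilities for n = 10 tosses of a fair coin (p = 0.5) with frequencies from repeated simulation; the trial counts and seed are illustrative choices.

```python
import random
from math import comb

n, p = 10, 0.5          # 10 Bernoulli trials with success probability 0.5
trials = 100_000        # number of simulated experiments

# Exact distribution law: P{X = k} = C(n, k) * p^k * (1 - p)^(n - k)
exact = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

# Simulated frequencies of each value of X
random.seed(3)
counts = [0] * (n + 1)
for _ in range(trials):
    x = sum(random.random() < p for _ in range(n))
    counts[x] += 1

for k in range(n + 1):
    print(f"k = {k:2d}  exact = {exact[k]:.4f}  simulated = {counts[k] / trials:.4f}")
```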

