A definite quantity, such as the number of people in a class or the 5 apples on a table, describes something that already exists and is fixed. Probability, by contrast, studies uncertainty: indefinite quantities, which represent things that have not yet happened, namely random events (strictly speaking, probability studies one kind of uncertainty, where the outcome is uncertain but the range of possible results is known); and imprecise measurements, where a quantity does have an exact value but, for technical or operational reasons, its true value cannot be measured, such as the length of a pen, whose true value you can in theory never measure exactly. This can be summed up in three scenarios:
1) Random experiments, as represented by the coin toss.
2) The relationship between a sample and the whole population.
3) The relationship between approximate values and true values.
Of course, in reality, the above 3 types of problems are often intertwined.
The first kind of problem concerns the relationship between frequency and probability. It is essentially a measurement problem: frequency is easy to obtain, while the true probability (the true value) is not. In practical applications we routinely substitute frequency for probability, but logically this is not rigorous on its own and seems to lack a theoretical basis. The strong law of large numbers supplies it: as the number of random trials n tends to infinity, the frequency stabilizes at the probability. Of course, in practice n does not really need to tend to infinity.
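A small simulation can illustrate this convergence. The sketch below (names and the seed are my own choices, not from the original text) flips a fair coin n times and reports the observed frequency of heads, which settles near the true probability 0.5 as n grows.

```python
import random

def coin_flip_frequency(n, p=0.5, seed=42):
    """Flip a coin with heads-probability p a total of n times;
    return the observed frequency of heads."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n) if rng.random() < p)
    return heads / n

# The frequency stabilizes around the true probability as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, coin_flip_frequency(n))
```

Each run uses a fixed seed so the result is reproducible; with a different seed the individual frequencies change, but the stabilization around p does not.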
The second kind of problem arises when the whole population cannot be studied directly (the population may be infinite, human capacity and energy are limited, or measuring everything would cost more than it is worth), so we can only take a part of it (a sample) and study the whole through the sample. This raises the question of whether studying the sample can substitute for studying the population. The answer is yes in most cases, since we do it all the time; but scientific rigor still demands a proof that this is valid, and the limit theorems are the theoretical basis for this kind of research. It should be noted, however, that not every sample can substitute for the population: the two basic conditions of the limit theorems are that the sampling be independent and random. In other words, using samples to study the whole is valid only under those conditions.
The third kind of problem reflects the fact that, since the true value cannot be obtained, an approximate value is substituted for it, in particular the mean of multiple measurements. This is what we routinely do: measure several times and take the average in place of the true value. This inevitably raises a question: can this average really stand in for the true value? Here too the answer rests on the limit theorems. While they provide a theoretical basis for replacing the true value with the measured average, it is important to note that the measurement errors must be random and unbiased. This assumption matters a great deal: if your measurements do not satisfy it, the limit theorems will not back you up.
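A minimal sketch of this, under my own assumed numbers (a hypothetical "true" pen length and Gaussian noise), shows the average of many noisy but unbiased measurements landing very close to the true value:

```python
import random

TRUE_LENGTH = 14.2  # hypothetical true length of a pen in cm (unknowable in practice)

def measure(rng, bias=0.0, noise=0.05):
    """One measurement: the true value plus random zero-mean error,
    plus an optional systematic bias."""
    return TRUE_LENGTH + bias + rng.gauss(0, noise)

rng = random.Random(1)
n = 10_000
avg = sum(measure(rng) for _ in range(n)) / n
# With random, unbiased errors the average converges to the true value;
# a nonzero systematic bias would shift the average no matter how large n is,
# which is why the errors must be random and unintentional.
```

Setting `bias=0.1` in every call would leave the average about 0.1 cm off however many measurements are taken, illustrating the caveat in the paragraph above.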
That these seemingly impossible substitutions could be justified by theorems, as the masters managed to do, can only be admired. But it does make sense: although probability studies uncertainty, the uncertainty is still regular; at least the range of possible results is known. What if we knew nothing at all?
Note: the reading load during this period has been very heavy, so I have written relatively little. After finishing the knowledge restructuring, I will continue to share what I have learned.
Probability and Statistics 3 (Understanding the Limit Theorems)