By: ysuncn (You are welcome to reprint this; please credit the original source.)
What is standard deviation? According to the definition of the International Organization for Standardization (ISO), the standard deviation σ is the positive square root of the variance σ², and the variance is the expected value of the squared deviation of a random variable from its mean.
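In symbols, for a random variable X with expected value μ = E[X], these two definitions can be written as:

    σ² = E[(X − μ)²],    σ = √( E[(X − μ)²] )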
What is standard error? After reading a number of documents, some of them written by well-known experts, I found that the definitions are not consistent. Generally, there are two definitions:
1. The standard error of a sample of size n is the standard deviation of the sample divided by √n. PS: some people take the sample standard deviation divided by n as the standard error (presumably the standard deviation was simply computed incorrectly; but since the standard error concerns the estimate of the population mean, it is hard to say flatly that this is wrong);
2. The standard error of a statistic can also be characterized as the standard deviation of the error of the estimate, that is, the standard deviation of the statistic's sampling distribution.
The following is an article titled "Standard Deviation and Standard Error" by Hao la-kun, published in a journal for editors. It is closely related to what I have said above, and I hope it will help you.
The standard deviation characterizes random error (or true error) and is a statistical average of the absolute magnitude of the random error. In the national metrology technical specifications, its official name is the standard deviation, represented by the symbol σ. The standard deviation goes by more than ten other names, such as the population standard deviation, parent standard deviation, root mean square error, root mean square deviation, mean square error, mean square deviation, single-measurement standard deviation, and theoretical standard deviation. The standard deviation is defined as σ = √( E[(X − μ)²] ). In practice the sample standard deviation s is used as an estimate of the population standard deviation σ; the formula for calculating the sample standard deviation is s = √( Σ(xᵢ − x̄)² / (n − 1) ).
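As a small illustrative sketch (my own, not part of the quoted article), the sample standard deviation can be computed directly from this formula in Python; the function name and the measurement values below are made up for the example:

    import math

    def sample_standard_deviation(values):
        # Sample standard deviation s, using the (n - 1) denominator
        n = len(values)
        mean = sum(values) / n
        squared_deviations = sum((x - mean) ** 2 for x in values)
        return math.sqrt(squared_deviations / (n - 1))

    measurements = [9.8, 10.1, 10.0, 9.9, 10.2]      # hypothetical readings
    print(sample_standard_deviation(measurements))   # about 0.158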
In sampling tests (or repeated measurements of equal precision), the standard deviation of the sample mean is often used; it is also known as the standard error of the sample mean, or the standard error of the mean. Because the sample standard deviation s does not directly reflect the error between the sample mean x̄ and the population mean μ, what is needed is a measure of the error of the sample mean relative to the population mean. The standard error of the sample mean is s_x̄ = s/√n, which estimates σ/√n; it reflects the dispersion of the sample mean. The smaller the standard error, the closer the sample mean is to the population mean; conversely, a larger standard error indicates that the sample mean is more widely dispersed.
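To make this concrete, here is a rough Python sketch (again my own illustration, not from the article): it estimates the standard error of the mean of a single sample as s/√n and compares it with the spread of sample means across many simulated samples. The population parameters, sample size, and number of repetitions are assumptions chosen just for the demonstration:

    import math
    import random

    random.seed(0)

    def sample_std(values):
        n = len(values)
        mean = sum(values) / n
        return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

    n = 30                    # assumed sample size
    mu, sigma = 10.0, 2.0     # assumed population mean and standard deviation

    # Standard error of the mean estimated from one sample: s / sqrt(n)
    one_sample = [random.gauss(mu, sigma) for _ in range(n)]
    sem_estimate = sample_std(one_sample) / math.sqrt(n)

    # For comparison: the standard deviation of the sample mean across many samples
    sample_means = []
    for _ in range(5000):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        sample_means.append(sum(sample) / n)
    empirical_sem = sample_std(sample_means)

    print(sem_estimate)    # roughly sigma / sqrt(n) = 2 / sqrt(30) ≈ 0.37
    print(empirical_sem)   # close to the same value

The two numbers should come out close to each other, which is exactly the sense in which s/√n measures how far a sample mean tends to fall from the population mean.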
The standard deviation is an index of the variation among individual observations. It reflects the dispersion of the whole sample around the sample mean and is a measure of the precision of the data; the standard error reflects how much the sample mean varies about the population mean and thus the size of the sampling error, and it is an index of the precision of the measurement result.
Reference: http://mathworld.wolfram.com/standarderror.html
Can you now tell standard deviation and standard error apart clearly?