No matter what the number is, the computer eventually converts it to 0s and 1s for storage, so you need to understand the following:
- How a decimal fraction is converted to binary
- How the binary form of a floating-point number is stored
Binary representation of floating-point numbers
First, to understand the binary representation of a floating-point number, there are two rules:
- For the integer part, divide by 2 repeatedly and read the remainders in reverse order.
- For the fractional part, multiply by 2 repeatedly, take the integer part each time, and read those digits in order.
What is the binary representation of 0.1?
Apply the rule for the fractional part:
0.1 * 2 = 0.2, integer part is 0
0.2 * 2 = 0.4, integer part is 0
0.4 * 2 = 0.8, integer part is 0
0.8 * 2 = 1.6, integer part is 1
0.6 * 2 = 1.2, integer part is 1
0.2 * 2 = 0.4, integer part is 0
...
So you will find that the binary representation of 0.1 is 0.000110011001100110011..., with the block 0011 repeating forever: it is an infinitely repeating binary fraction.
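A quick way to reproduce this is to code the multiply-by-2 rule directly. This is a minimal sketch; it uses fractions.Fraction so the input 1/10 is exact, and the function name and the choice of 20 digits are just for illustration:

```python
from fractions import Fraction

def fraction_to_binary_digits(x, n):
    """Repeatedly multiply the fractional part by 2 and collect the integer parts."""
    digits = []
    for _ in range(n):
        x *= 2
        digit = int(x)        # integer part produced by this step
        digits.append(str(digit))
        x -= digit            # keep only the fractional part for the next step
    return "".join(digits)

# First 20 binary digits of 0.1: 00011001100110011001 (0011 keeps repeating)
print(fraction_to_binary_digits(Fraction(1, 10), 20))
```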
This creates a problem: the binary expansion of 0.1 never ends. Even if you put together all the hard drives in the world, you could not store the exact binary value of 0.1.
Binary storage of floating-point numbers
Python, like C, stores floating-point numbers according to the IEEE 754 specification. For a double-precision floating-point number, IEEE 754 divides the 64 bits into 3 parts:
- The first bit stores the sign, which determines whether the number is positive or negative.
- The next 11 bits store the exponent.
- The remaining 52 bits store the mantissa (the fraction).
[Figure: IEEE 754 double-precision floating-point format]
It follows that a double can only store a finite set of values: it has no more than 2^64 distinct bit patterns, while the real numbers are infinite. All a computer can do is store a value that is close to the intended decimal and treat it as equal within a certain precision. In other words, storing a fraction usually (though not always) loses some precision.
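To see those three fields for a concrete number, you can pull the 64 bits apart with the standard struct module. This is a minimal sketch that just follows the 1/11/52 layout described above; the helper name is made up for illustration:

```python
import struct

def double_fields(x):
    """Return (sign, biased exponent, mantissa) bit fields of a 64-bit IEEE 754 double."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # the raw 64 bits as an integer
    sign     = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
    mantissa = bits & ((1 << 52) - 1)      # 52-bit fraction field
    return sign, exponent, mantissa

s, e, m = double_fields(0.1)
# sign 0, unbiased exponent -4, fraction bits 1001100110011...1010
print(s, e - 1023, f"{m:052b}")
```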
Problems with floating point calculation
Now we can look back at the questions you raised.
0.1 + 0.2 == 0.3
0.1, as actually stored in the computer, is the real number
0.1000000000000000055511151231257827021181583404541015625
0.2 is stored as
0.200000000000000011102230246251565404236316680908203125
and 0.3 is stored as
0.299999999999999988897769753748434595763683319091796875
That is why 0.1 + 0.2 != 0.3.
The case of 1.1 + 2.2 is similar.
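You can verify these stored values directly in Python: converting a float to decimal.Decimal shows the exact value the double really holds. A short sketch using only the standard library:

```python
from decimal import Decimal

# The exact values the doubles really store:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))   # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.3))   # 0.299999999999999988897769753748434595763683319091796875

print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004
print(1.1 + 2.2)          # 3.3000000000000003  -- the same effect
```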
First, to be clear, this is not a bug; it is a precision problem caused by converting decimal to binary. Second, it shows up in almost every programming language: C, C++, Java, JavaScript, Python. To be exact, any language that stores floating-point types in the IEEE 754 format (float is 32 bits, double is 64 bits) has this problem!
A brief introduction to the IEEE 754 floating-point format: it represents a floating-point number in scientific notation with base 2. A single-precision IEEE floating-point number (32 bits in total) uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa (the fractional part). The exponent is stored in biased form, and the mantissa is stored without a sign bit; the bias is chosen so that the floating-point value 0 is represented with all bits set to 0. A double-precision floating-point number (64 bits) uses 1 sign bit, 11 exponent bits, and 52 mantissa bits.
Because scientific notation allows many ways to write the same number, floating-point numbers are normalized: the mantissa is written in base 2 with exactly one digit to the left of the binary point (and in binary that leading digit, if not 0, must be 1), and the exponent is adjusted as needed. For example, decimal 1.25 = binary 1.01, which is stored with exponent 0, mantissa 1.01, and sign bit 0.
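Python exposes this normalized form through float.hex(), which prints the significand and the power-of-two exponent. A small check (not part of the original explanation):

```python
# 1.25 is exactly representable: significand 1.01 (binary), exponent 0
print((1.25).hex())   # 0x1.4000000000000p+0   (hex .4 = binary .0100)
print((0.1).hex())    # 0x1.999999999999ap-4   (rounded significand, exponent -4)
```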
Back to the original question: why does 0.1 + 0.2 give 0.30000000000000004? First, that result comes from JavaScript (note that JavaScript's numeric type is stored in the 64-bit IEEE 754 format). Just as decimal cannot exactly represent 1/3 (0.33333...), binary has values it cannot represent exactly, for example 1/10. For a 64-bit floating-point number:
Decimal 0.1 => binary 0.000110011001100110011... (the block 0011 repeats)
=> normalized, the mantissa is 1.1001100110011001100110011001100110011001100110011010 (1 plus 52 fraction bits, with the infinite tail rounded to the nearest representable value), the exponent is -4 (biased 11-bit field: 01111111011), and the sign bit is 0 => stored as: 0 01111111011 1001100110011001100110011001100110011001100110011010. Because the mantissa holds at most 52 bits, the value actually stored corresponds to the binary fraction 0.00011001100110011001100110011001100110011001100110011010, slightly larger than 0.1.
Decimal 0.2 => binary 0.0011001100110011... (the block 0011 repeats)
=> normalized, the mantissa is the same 1.1001100110011001100110011001100110011001100110011010, the exponent is -3 (biased field: 01111111100), and the sign bit is 0 => stored as: 0 01111111100 1001100110011001100110011001100110011001100110011010.
Because the mantissa holds at most 52 bits, the value actually stored corresponds to the binary fraction 0.0011001100110011001100110011001100110011001100110011010, slightly larger than 0.2.
Add the two stored values:
0.00011001100110011001100110011001100110011001100110011010 + 0.0011001100110011001100110011001100110011001100110011010 = 0.01001100110011001100110011001100110011001100110011001110
That exact sum has more significant bits than a 52-bit mantissa can hold, so it is rounded once more to the nearest representable double: 0.010011001100110011001100110011001100110011001100110100 in binary.
Converted back to decimal, that value is 0.30000000000000004!
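float.hex() confirms this at the bit level: 0.1 and 0.2 share the same significand, and their sum rounds up to a pattern one bit above that of 0.3. A quick check in Python:

```python
print((0.1).hex())        # 0x1.999999999999ap-4
print((0.2).hex())        # 0x1.999999999999ap-3
print((0.1 + 0.2).hex())  # 0x1.3333333333334p-2
print((0.3).hex())        # 0x1.3333333333333p-2  -- one bit lower, hence the mismatch
```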
Related links:
Language agnostic - Is floating point math broken? - Stack Overflow: http://stackoverflow.com/questions/588004/is-floating-point-math-broken
Floating point arithmetic and Agent Based Models: http://www.macaulay.ac.uk/fearlus/floating-point/
Because binary cannot exactly represent decimal fractions, float arithmetic carries errors.
The errors in 1.1 + 2.2 and 0.1 + 0.2 come out differently because the operands round differently. For example, in normalized binary:
1.1 ≈ 1.0001100110011... × 2^0
0.1 ≈ 1.1001100110011... × 2^-4
The original poster can look into how decimal fractions are laid out in computer memory.
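For example, the raw bytes a double occupies in memory can be dumped with struct. A minimal sketch; "<d" here means a little-endian 64-bit double:

```python
import struct

# The 8 bytes each 64-bit double occupies in memory (little-endian here)
for x in (1.1, 2.2, 1.1 + 2.2, 3.3):
    print(x, struct.pack("<d", x).hex())
# 1.1 + 2.2 and 3.3 produce different byte patterns, so they compare unequal.
```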