14. Floating-Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base-2 (binary) fractions. For example, the decimal fraction
0.125
has the value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has the value 0/2 + 0/4 + 1/8. These two fractions have identical values; the only real difference is that the first is written in base-10 fractional notation and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. As a consequence, the decimal floating-point numbers you enter are, in general, only approximated by the binary floating-point numbers actually stored in the machine.
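This contrast can be checked directly with float.as_integer_ratio(), which returns the exact fraction the machine stores (a small added illustration, not part of the original text): 0.125 has an exact binary representation, while 0.1 does not.

```python
# 0.125 is exactly 1/8 (a power-of-two denominator), so it is stored exactly.
print((0.125).as_integer_ratio())  # (1, 8)

# 0.1 has no exact binary representation; the stored fraction is only close.
print((0.1).as_integer_ratio())    # (3602879701896397, 36028797018963968)
```

Note that the second denominator, 36028797018963968, is 2**55.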
The problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate it as a base-10 fraction:
0.3
or, better,
0.33
and so on. No matter how many digits you are willing to write down, the result is never exactly 1/3, but it gets ever closer to 1/3.
In the same way, no matter how many base-2 digits you are willing to use, the decimal value 0.1 cannot be represented exactly as a base-2 fraction. In base 2, 1/10 is the infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits and you get an approximation. On most machines today, floats are approximated using a binary fraction whose numerator uses the first 53 bits (starting with the most significant bit) and whose denominator is a power of two. In the case of 1/10, the binary fraction is 3602879701896397 / 2 ** 55, which is close to, but not exactly equal to, the true value of 1/10.
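As a quick sanity check (an added sketch, not part of the original text), Python can confirm that this fraction evaluates to exactly the same double as the literal 0.1: the numerator fits in 53 bits and the denominator is a power of two, so the division below is exact.

```python
# The division is exact, so it reproduces the stored approximation of 0.1.
approx = 3602879701896397 / 2 ** 55
print(approx == 0.1)  # True
```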
Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation of the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display:
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:
>>> 1 / 10
0.1
Just remember that even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.
Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1, 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.
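Both facts can be demonstrated in a couple of lines (an added illustrative sketch): the literals compare equal as floats, and repr() round-trips.

```python
# Different decimal literals that round to the same nearest double compare equal.
print(0.1 == 0.10000000000000001)  # True

# repr() picks a string that evaluates back to the identical float.
x = 0.1
print(eval(repr(x)) == x)  # True
```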
Historically, the Python prompt and the built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is able to choose the shortest of these and simply display 0.1.
Note that this is in the very nature of binary floating point: it is not a bug in Python, nor is it a bug in your code. You'll see the same kind of thing in all languages that support your hardware's floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
For more pleasant output, you may wish to use string formatting to produce a limited number of significant digits:
>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'
>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'
>>> repr(math.pi)
'3.141592653589793'
It's important to realize that this is, in a real sense, an illusion: you're simply rounding the display of the true machine value.
One illusion may beget another. For example, since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3, either:
>>> .1 + .1 + .1 == .3
False
Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot get any closer to the exact value of 3/10, pre-rounding with the round() function does not help:
>>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
False
Though the numbers cannot be made closer to their intended exact values, the round() function can be useful for post-rounding, so that results with inexact values become comparable to one another:
>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True
Binary floating-point arithmetic holds many surprises like this. The problem with "0.1" is explained in precise detail below, in the "Representation Error" section. See The Perils of Floating Point for a more complete account of other common surprises.
As that says near the end, "there are no easy answers." Still, don't be unduly wary of floating point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That's more than adequate for most tasks, but you do need to keep in mind that it's not decimal arithmetic and that every float operation can suffer a new rounding error.
While pathological cases do exist, for most casual use of floating-point arithmetic you'll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str() usually suffices, and for finer control see the str.format() method's format specifiers in the Format String Syntax.
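For instance, a brief sketch of display-only rounding (added for illustration): the stored value stays inexact, but a formatted display gives the expected digits.

```python
# The stored sum is slightly larger than 0.3, but rounding the *display*
# yields the expected result without changing the stored value.
total = 0.1 + 0.1 + 0.1
print(str(total))              # '0.30000000000000004'
print(format(total, '.2f'))    # '0.30'
print('{:.1f}'.format(total))  # '0.3'
```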
For use cases that require an exact decimal representation, try the decimal module, which implements decimal arithmetic suitable for accounting applications and other high-precision applications.
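A minimal sketch of the decimal module in action (illustrative only): constructing values from strings keeps them exact in base 10.

```python
from decimal import Decimal

# Decimal('0.1') is exactly one tenth, so sums behave like pencil-and-paper math.
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3'))  # True

# Compare with binary floats, where the same sum is off by a rounding error.
print(0.1 + 0.1 + 0.1 == 0.3)  # False
```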
Another form of exact arithmetic is supported by the fractions module, which implements arithmetic based on rational numbers (so numbers like 1/3 can be represented exactly).
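A minimal sketch of fractions usage (added for illustration):

```python
from fractions import Fraction

# Rational arithmetic is exact: three thirds sum to exactly 1.
third = Fraction(1, 3)
print(third + third + third == 1)  # True

# Tenths are exact too, unlike their binary float counterparts.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```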
If you are a heavy user of floating-point operations, you should take a look at the NumPy package and the many other packages for mathematical and statistical operations supplied by the SciPy project.
Python provides tools that can help on those rare occasions when you really do want to know the exact value of a float. The float.as_integer_ratio() method expresses the value of a float as a fraction:
>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)
Since the ratio is exact, it can be used to losslessly recreate the original value:
>>> x == 3537115888337719 / 1125899906842624
True
The float.hex() method expresses a float in hexadecimal (base 16), again giving the exact value stored by your computer:
>>> x.hex()
'0x1.921f9f01b866ep+1'
This precise hexadecimal representation can be used to reconstruct the float value exactly:
>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True
Since the representation is exact, it is useful for reliably porting values across different versions of Python (platform independence) and for exchanging data with other languages that support the same format (such as Java and C99).
Another helpful tool is the math.fsum() function, which helps mitigate loss of precision during summation. It tracks "lost digits" as values are added onto a running total. That can make a difference in overall accuracy, so that the errors do not accumulate to the point where they affect the final total:
>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True
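The difference is easiest to see by printing both totals; the following sketch (added for illustration) contrasts sum() with math.fsum() on the same list:

```python
import math

vals = [0.1] * 10

# Each addition in sum() can introduce its own rounding error...
print(sum(vals))        # 0.9999999999999999

# ...while fsum() tracks the lost low-order bits and recovers the exact total.
print(math.fsum(vals))  # 1.0
```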
14.1. Representation Error
This section explains the "0.1" example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed here.
Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base-2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many other languages) often won't display the exact decimal number you expect.
Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines today (November 2000) use IEEE-754 floating-point arithmetic, and on almost all platforms Python maps floats to IEEE-754 "double precision." 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N, where J is an integer containing exactly 53 bits. Rewriting
1/10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:
>>> 2**52 <= 2**56 // 10 < 2**53
True
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:
>>> q, r = divmod(2**56, 10)
>>> r
6
Since the remainder is more than half of 10, the best approximation is obtained by rounding up:
>>> q + 1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is:
7205759403792794 / 2 ** 56
Dividing both the numerator and denominator by two reduces the fraction to:
3602879701896397 / 2 ** 55
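The whole derivation can be replayed in a few lines (a sketch added here for checking, not part of the original text):

```python
from fractions import Fraction

# Best 53-bit numerator J for 1/10 ~= J / 2**56, rounded to nearest.
q, r = divmod(2 ** 56, 10)
j = q + 1 if 2 * r >= 10 else q
print(j)  # 7205759403792794

# Reducing J / 2**56 by a factor of two gives the fraction quoted above.
print(Fraction(j, 2 ** 56) == Fraction(3602879701896397, 2 ** 55))  # True
```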
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!
So the computer never "sees" 1/10: what it sees is the exact fraction given above, the best 754 double approximation it can get:
>>> 0.1 * 2 ** 56
7205759403792794.0
The fractions and decimal modules make these calculations easy:
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> format(Decimal.from_float(0.1), '.17')
'0.10000000000000001'