Floating point precision loss is a bug that developers and database developers keep running into, whatever the language. Many people have heard the reason, yet still cannot state the rule precisely. The best way to deal with the problem is to be able to reproduce it at will; once the error is under your control, it can be avoided in real development.
Let's look at two scenarios:
The first is C# code, which does not output 0.1 as expected but 0.10000000149011612:
Console.WriteLine(Convert.ToDouble(0.1f));
The second is SQL code, which shows the same error as the C# code:
DECLARE @f FLOAT(23)
DECLARE @s FLOAT(53)
SET @f = 0.1
SET @s = @f
SELECT @s
The most authoritative explanation for this error is the IEEE 754 floating point standard. For details, see MSDN:
http://msdn.microsoft.com/zh-cn/library/0b34tf65(vs.80).aspx
If you find that description too dry, or you cannot connect it to the error above, here is my own explanation.
Many of the decimal fractions we use cannot be converted to exact binary fractions. Simple values such as 0.1, 0.2, and 0.11 become infinitely repeating fractions in the IEEE floating point format; only values such as 0.25, 0.5, and 0.375 convert to exact, finite binary fractions.
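A quick way to see this for yourself (a minimal sketch of my own, not part of the original examples) is to compare a float literal with the corresponding double literal. Values that are exact in binary survive the float-to-double widening unchanged, while the repeating ones are rounded differently at 24-bit and 53-bit precision:

using System;

class RepresentabilityCheck
{
    static void Main()
    {
        // Exact binary fractions widen from float to double without change.
        Console.WriteLine(0.375f == 0.375);  // True
        Console.WriteLine(0.25f  == 0.25);   // True
        Console.WriteLine(0.5f   == 0.5);    // True

        // Infinitely repeating binary fractions are rounded differently
        // in float and double, so the comparison fails.
        Console.WriteLine(0.1f  == 0.1);     // False
        Console.WriteLine(0.2f  == 0.2);     // False
        Console.WriteLine(0.11f == 0.11);    // False
    }
}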
For example, take the decimal numbers 0.1 and 0.375 and convert them by hand. The idea is very simple, much like the arithmetic we learned in elementary school.
0.375 = 0.375 * 2^2 * 2^(-2) = 1.5 * 2^(-2)   // the significand falls between 1 and 2, exactly as we do in decimal scientific notation
0.5   = 0.5 * 2 * 2^(-1) = 1 * 2^(-1)
0.375 = 2^(-2) + 2^(-3) = 0.011 (binary)      // a finite binary fraction
It may be clearer if we write it as fractions:
0.375 = 1/4 + 1/8 = 0.01 + 0.001 = 0.011
---------
0.1 = 1.6 * (1/16)
    = 1/16 + 0.6/16
    = 1/16 + 1.2/32
    = 1/16 + 1/32 + 0.2/32
    = 1/16 + 1/32 + 1.6/2^8
    = 1/2^4 + 1/2^5 + 1/2^8 + 0.6/2^8
... Clearly we are back at a 0.6 remainder, just as in the second step, so the expansion repeats forever.
0.1 = 0.0001100110011001100110011... in binary. No matter how long the mantissa is, the value can never be exact, and in practice there are only 23 bits (float) or 52 bits (double) to hold it.
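The hand conversion above follows a simple rule: repeatedly multiply the fractional part by 2 and take the integer part as the next binary digit. Here is a minimal sketch of that rule (the method name and digit limit are my own choices, not from the article); it uses decimal internally precisely to avoid the rounding we are trying to illustrate:

using System;
using System.Text;

class BinaryExpansion
{
    // Expands a decimal fraction (0 < value < 1) into binary digits,
    // stopping after maxDigits or when the expansion terminates.
    static string ToBinaryFraction(decimal value, int maxDigits)
    {
        var sb = new StringBuilder("0.");
        for (int i = 0; i < maxDigits && value != 0m; i++)
        {
            value *= 2m;             // shift one binary place to the left
            int digit = (int)value;  // the integer part is the next bit
            sb.Append(digit);
            value -= digit;          // keep only the fractional part
        }
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToBinaryFraction(0.375m, 30)); // 0.011 -- terminates
        Console.WriteLine(ToBinaryFraction(0.1m, 30));   // 0.000110011001100110011001100110 -- repeats forever
    }
}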
Now let's look at the following test code:
using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication19
{
    class Program
    {
        static void Main(string[] args)
        {
            // 0.1 repeats forever in binary, so precision is lost when the
            // value is converted to another floating point type.
            Console.WriteLine(Convert.ToDouble(0.1f));

            // 0.375 is a finite binary fraction, so no loss appears.
            Console.WriteLine(Convert.ToDouble(0.375f));

            // In practice, problems show up when values of different precision
            // meet in a computation.
            // Why is there no loss when the precision is the same? That case is optimized.

            // Take 0.1f as an example. Its bytes in memory are:
            byte[] b = System.BitConverter.GetBytes(0.1f);
            foreach (byte bb in b)
            {
                Console.Write("{0} ", bb);
            }
            Console.WriteLine("\r\n");

            // You can see the first byte, 205, was rounded up while the others are 204,
            // much like rounding 2/3 to 0.66667.
            b[0] -= 1;
            float f = BitConverter.ToSingle(b, 0);
            // The actual value of f is 0.099999994f, but the trailing 4 is dropped when it is printed.
            Console.WriteLine(f);

            // So from the output alone you cannot distinguish 0.099999994f from 0.1f.
            // Set a breakpoint and inspect the floats below one at a time:
            // from 0.099999998f onward the literal rounds up to 0.1.
            float f1 = 0.099999994f;
            float f2 = 0.099999995f;
            float f3 = 0.099999996f;
            float f4 = 0.099999997f;
            float f5 = 0.099999998f;
            float f6 = 0.09999999999f;
            Console.WriteLine("f1:{0}\r\nf2:{1}\r\nf3:{2}\r\nf4:{3}\r\nf5:{4}\r\nf6:{5}",
                f1, f2, f3, f4, f5, f6);

            // Check these six floats with BitConverter.GetBytes: f1 through f4 share
            // the same bits, while f5, f6 and 0.1f share the same bits.
            Console.Read();
        }
    }
}
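As the opening paragraph says, once you can reproduce the loss you can also avoid it. One common approach (my own illustration, not part of the original test) is to keep values that must stay exact in decimal, which stores base-10 digits directly, and to compare float/double values with a tolerance instead of ==:

using System;

class AvoidingTheLoss
{
    static void Main()
    {
        // decimal stores base-10 digits, so 0.1 is exact.
        decimal d = 0.1m;
        Console.WriteLine(d);                       // 0.1

        // When float/double must be mixed, compare with a tolerance instead of ==.
        float f = 0.1f;
        double g = 0.1;
        Console.WriteLine(f == g);                  // False: rounded differently at 24 and 53 bits
        Console.WriteLine(Math.Abs(f - g) < 1e-7);  // True: close enough for most purposes
    }
}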