Decimal    Binary
0.1        0.0001 1001 1001 1001 ...
0.2        0.0011 0011 0011 0011 ...
0.3        0.0100 1100 1100 1100 ...
0.4        0.0110 0110 0110 0110 ...
0.5        0.1
0.6        0.1001 1001 1001 1001 ...
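Because each of these repeating expansions has to be truncated to the 52 fraction bits of an IEEE 754 double, arithmetic on such values drifts slightly. A quick check in any JavaScript console shows the effect:

0.1 + 0.2;           // 0.30000000000000004
(0.1).toFixed(20);   // "0.10000000000000000555"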
So, for example, a program cannot actually represent 1.1 exactly; it can only store the nearest binary approximation, which is an unavoidable loss of precision. What is actually stored for 1.1 is closer to:
1.10000000000000008882
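You can see this directly in the console by asking for more digits than the default formatting shows:

(1.1).toFixed(20);   // "1.10000000000000008882"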
The problem is even easier to trip over in JavaScript. Here is some test data from the Chrome console:
Expression            Result
1.0 - 0.9 == 0.1      false
1.0 - 0.8 == 0.2      false
1.0 - 0.7 == 0.3      false
1.0 - 0.6 == 0.4      true
1.0 - 0.5 == 0.5      true
1.0 - 0.4 == 0.6      true
1.0 - 0.3 == 0.7      true
1.0 - 0.2 == 0.8      true
1.0 - 0.1 == 0.9      true
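A minimal sketch to reproduce this table yourself in the console (the variable names x and expected are just for illustration):

// Compare 1.0 - x against the decimal result you would expect on paper
for (var i = 1; i <= 9; i++) {
    var x = i / 10;                  // 0.1, 0.2, ..., 0.9
    var expected = (10 - i) / 10;    // 0.9, 0.8, ..., 0.1
    console.log('1.0 - ' + x + ' == ' + expected + ' -> ' + (1.0 - x == expected));
}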
Solution
So how do you avoid bugs of the 1.0 - 0.9 != 0.1 kind? A common solution is to reduce the precision of the result before comparing floating-point results, because narrowing the precision always rounds automatically:
(1.0 - 0.9).toFixed(digits)                     // the toFixed() precision argument must be between 0 and 20
parseFloat((1.0 - 0.9).toFixed(10)) == 0.1      // result is true
parseFloat((1.0 - 0.8).toFixed(10)) == 0.2      // result is true
parseFloat((1.0 - 0.7).toFixed(10)) == 0.3      // result is true
parseFloat((11.0 - 11.8).toFixed(10)) == -0.8   // result is true
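Note that toFixed() returns a string, not a number, which is why the result is passed through parseFloat() before the comparison:

typeof (1.0 - 0.9).toFixed(10);        // "string"
(1.0 - 0.9).toFixed(10);               // "0.1000000000"
parseFloat((1.0 - 0.9).toFixed(10));   // 0.1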
Refining the method
// Use the IsEqual helper to test whether two values are equal
function IsEqual(number1, number2, digits) {
    digits = digits == undefined ? 10 : digits;   // default precision is 10
    return number1.toFixed(digits) == number2.toFixed(digits);
}
IsEqual(1.0 - 0.7, 0.3);   // returns true
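Because both sides of the comparison are the strings produced by toFixed(), no parseFloat() call is needed here: two numbers that round to the same ten decimal places yield identical strings.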
The helper can also be attached to the native Number prototype, for a more object-oriented style:
Number.prototype.isEqual = function (number, digits) {
    digits = digits == undefined ? 10 : digits;   // default precision is 10
    return this.toFixed(digits) == number.toFixed(digits);
};
(1.0 - 0.7).isEqual(0.3);   // returns true
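A more modern alternative, sketched here on the assumption that ES2015's Number.EPSILON is available (the helper name nearlyEqual is just illustrative): instead of rounding, compare the difference against a tiny tolerance.

// Treat two numbers as equal when they differ by less than Number.EPSILON,
// the gap between 1 and the next representable double
function nearlyEqual(a, b) {
    return Math.abs(a - b) < Number.EPSILON;
}
nearlyEqual(1.0 - 0.7, 0.3);   // true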