This article explains how to avoid precision errors in JavaScript numeric calculations. If you have run into this problem, I hope you find the reference helpful.
What if I asked you what 0.1 + 0.2 equals? You might roll your eyes at me: 0.1 + 0.2 = 0.3, why even ask? Even a kindergartner could answer such a trivial question. But in a programming language, the same problem may not be as simple as it looks.
Don't believe me? Let's take a look at JS first.
var numA = 0.1;
var numB = 0.2;
alert((numA + numB) === 0.3);
The execution result is false. Yes, when I first saw this piece of code, I took it for granted that the result would be true, but the actual output made my jaw drop. Was I doing something wrong? Not at all. Let's run the following code to see why the result is false.
var numA = 0.1;
var numB = 0.2;
alert(numA + numB);
It turns out that 0.1 + 0.2 = 0.30000000000000004. Strange, isn't it? In fact, almost every programming language has similar precision errors in floating-point arithmetic; languages such as C++, C#, and Java simply ship library methods that work around the problem. JavaScript is a weakly typed language that, by design, has no strict data type for floating-point numbers, so the precision error is especially prominent there. Below we analyze where this error comes from and how to fix it.
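You don't have to take my word for the hidden digits. As a quick console check (output shown from a typical engine such as V8), you can ask JavaScript to print more digits of the underlying value than it normally shows:

console.log((0.1).toPrecision(21));        // 0.100000000000000005551
console.log((0.2).toPrecision(21));        // 0.200000000000000011102
console.log((0.1 + 0.2).toPrecision(21));  // 0.300000000000000044409

Neither 0.1 nor 0.2 is stored exactly; the familiar short forms are just rounded for display.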
First, we need to look at this seemingly trivial 0.1 + 0.2 from the computer's point of view. Computers read binary, not decimal, so let's convert 0.1 and 0.2 into binary:
0.1 = 0.0001 1001 1001 1001 ... (the pattern repeats infinitely)
0.2 = 0.0011 0011 0011 0011 ... (the pattern repeats infinitely)
The mantissa of a double-precision floating-point number holds at most 52 bits, so both operands are rounded when they are stored, and their sum is rounded once more. The binary value that actually gets stored for the sum is 0.0100110011001100110011001100110011001100110011001101 (trailing zeros dropped), which converts back to decimal as 0.30000000000000004.
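You don't have to take the binary expansion on faith either. Number.prototype.toString accepts a radix, so you can print the exact bits of the stored values; each printout terminates, because the stored double is a finite binary fraction (outputs shown from a typical engine):

console.log((0.1).toString(2));
// 0.0001100110011001100110011001100110011001100110011001101
console.log((0.2).toString(2));
// 0.001100110011001100110011001100110011001100110011001101
console.log((0.1 + 0.2).toString(2));
// 0.0100110011001100110011001100110011001100110011001101

The rounded tails of the two operands add up, which is exactly where the stray 0.00000000000000004 comes from.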
So how do we solve this problem? After all, what I want is 0.1 + 0.2 === 0.3!
One of the simplest solutions is to specify an explicit precision requirement, so that the return value is rounded for us, for example:
var numA = 0.1;
var numB = 0.2;
alert(parseFloat((numA + numB).toFixed(2)) === 0.3);
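One detail worth calling out: toFixed returns a string, which is why the result goes through parseFloat before the comparison; the string "0.30" would never be === to the number 0.3. Step by step:

var sum = 0.1 + 0.2;
console.log(sum.toFixed(2));                     // "0.30" (a string)
console.log(parseFloat(sum.toFixed(2)));         // 0.3   (a number)
console.log(parseFloat(sum.toFixed(2)) === 0.3); // true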
But this is obviously not a cure-all. It would be better to have a method that handles the floating-point precision problem for us. Let's try this one:
Math.formatFloat = function (f, digit) {
    var m = Math.pow(10, digit);
    return parseInt(f * m, 10) / m;
};
var numA = 0.1;
var numB = 0.2;
alert(Math.formatFloat(numA + numB, 1) === 0.3);
What does this method do? To avoid the precision error, we multiply the number by 10 to the power of n, converting it into an integer that the computer can represent exactly, and then divide by 10 to the power of n again. This is how most programming languages handle precision differences internally, and we borrow the idea to handle floating-point precision errors in JS; a concrete sketch follows.
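To make the idea concrete, here is a small sketch under my own names (addFloat is illustrative, not a standard API or the article's method). One tweak over the method above: it uses Math.round instead of parseInt, because parseInt truncates, so a sum such as 0.7 + 0.1, which evaluates to 0.7999999999999999, would scale to 7.999999999999999 and be cut down to 0.7 instead of rounded up to 0.8:

// Scale both operands to integers, add, then scale back down.
function addFloat(a, b, digit) {
    var m = Math.pow(10, digit);
    return Math.round(a * m + b * m) / m;
}

alert(addFloat(0.1, 0.2, 1) === 0.3); // true
alert(addFloat(0.7, 0.1, 1) === 0.8); // true, where formatFloat would give 0.7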
So the next time someone asks you what 0.1 + 0.2 equals, be careful how you answer!