This article introduces how to avoid precision errors in JavaScript numeric calculations. Readers who need this can use it as a reference; I hope it helps you.
What if I ask you what 0.1 + 0.2 equals? You might roll your eyes at me: 0.1 + 0.2 = 0.3, why even ask? Even a kindergartener can answer such a childish question. But did you know that inside a programming language, the same problem may not be as simple as it seems?
Don't believe it? Let's take a look at JS first.
var numA = 0.1;
var numB = 0.2;
alert((numA + numB) === 0.3);
The execution result is false. Yes, when I first saw this code I took it for granted that it would be true, but the execution proved me wrong. Was I doing something wrong? No. Let's run the following code to see why the result is false.
var numA = 0.1;
var numB = 0.2;
alert(numA + numB);
It turns out that 0.1 + 0.2 = 0.30000000000000004. Surprising, isn't it? In fact, almost all programming languages have similar precision errors in floating-point arithmetic. Languages such as C++/C#/Java ship encapsulated methods that work around the precision problem, while JavaScript is a weakly typed language that by design imposes no strict data type on floating-point numbers, so the precision error is especially prominent. Below we analyze why this precision error exists and how to work around it.
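The 0.1 + 0.2 case is not an isolated oddity. Here are a few more expressions (my own illustrative picks, not from the original example) that betray the same binary rounding in any IEEE 754 environment, which includes every modern JavaScript engine:

```javascript
// More calculations that look exact in decimal but are not in binary
// floating point:
console.log(0.1 + 0.2);  // 0.30000000000000004
console.log(0.1 + 0.7);  // 0.7999999999999999
console.log(0.3 - 0.1);  // 0.19999999999999998
```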
First, we have to look at this seemingly trivial 0.1 + 0.2 problem from the computer's point of view. A computer reads binary, not decimal, so let's first convert 0.1 and 0.2 into binary:
0.1 => 0.0001 1001 1001 1001 ... (1001 repeats forever)
0.2 => 0.0011 0011 0011 0011 ... (0011 repeats forever)
The fractional part of a double-precision floating-point number supports at most 52 bits, so both values are cut off at that limit. Adding the truncated values gives the binary string 0.0100110011001100110011001100110011001100110011001100..., which, converted back to decimal, is 0.30000000000000004.
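We can actually see this in JavaScript itself: `Number.prototype.toString` accepts a radix, so `toString(2)` prints the binary value that the 64-bit double really stores. A minimal check (my own addition to the article's explanation):

```javascript
// Print the binary representation the engine actually stores:
console.log((0.1).toString(2));        // begins 0.000110011001100...
console.log((0.2).toString(2));        // begins 0.001100110011001...
console.log((0.1 + 0.2).toString(2));  // a different bit pattern than...
console.log((0.3).toString(2));        // ...the one stored for 0.3
```

The last two strings differ, which is exactly why `0.1 + 0.2 !== 0.3`.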
So how do we solve the problem? After all, the result I want is 0.1 + 0.2 = 0.3!!!
The simplest solution is to state the required precision explicitly and have the returned value rounded to it, for example:
var numA = 0.1;
var numB = 0.2;
alert(parseFloat((numA + numB).toFixed(2)) === 0.3);
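One detail worth noting here: `toFixed()` returns a string, not a number, which is why the comparison goes through `parseFloat()`. A small demonstration:

```javascript
// toFixed() returns a string, so a direct strict comparison with a
// number fails even when the digits match:
var sum = 0.1 + 0.2;                              // 0.30000000000000004
console.log(sum.toFixed(2));                      // "0.30" (a string)
console.log(sum.toFixed(2) === 0.3);              // false: string vs number
console.log(parseFloat(sum.toFixed(2)) === 0.3);  // true
```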
But obviously this is not a universal solution. It would be better to have a method that solves the floating-point precision problem for us. Let's try the following:
Math.formatFloat = function (f, digit) {
    var m = Math.pow(10, digit);
    return parseInt(f * m, 10) / m;
};
var numA = 0.1;
var numB = 0.2;
alert(Math.formatFloat(numA + numB, 1) === 0.3);
What does this method do? To avoid the precision difference, we multiply the number by 10 to the nth power, turning it into an integer the computer can represent exactly, and then divide by the same power of 10. Most programming languages handle precision differences this way, and here we borrow the technique to handle the precision error of floating-point numbers in JS.
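One caveat with the `parseInt`-based version: `parseInt` truncates toward zero, so if the scaled value lands just below an integer (for example, `35.41 * 100` evaluates to `3540.9999999999995`) the result is off by one. A sketch of a variant using `Math.round` instead; this adjustment is my own, not part of the original method:

```javascript
// Same multiply-scale-divide idea, but Math.round() snaps to the
// nearest integer instead of truncating like parseInt():
Math.formatFloat = function (f, digit) {
  var m = Math.pow(10, digit);   // scale factor 10^digit
  return Math.round(f * m) / m;  // scale up, round, scale back down
};

console.log(Math.formatFloat(0.1 + 0.2, 1));   // 0.3
console.log(Math.formatFloat(35.41 * 100, 0)); // 3541, not 3540
```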
So the next time someone asks you what 0.1 + 0.2 equals, answer carefully!!