var sum = 0;
for (var i = 0; i < 10; i++) {
    sum += 0.1;
}
console.log(sum);
Does the above program output 1?
In the article 25 JavaScript questions you need to know, the 8th question touches on why JS cannot handle decimal operations correctly. Today we revisit that old problem and analyze it in more depth.
First of all, the failure to handle decimal operations correctly is not a design error in the JavaScript language itself; other high-level programming languages, such as C and Java, cannot handle decimal operations correctly either:
#include <stdio.h>

int main(void) {
    float sum;
    int i;

    sum = 0;
    for (i = 0; i < 100; i++) {
        sum += 0.1;
    }
    printf("%f\n", sum); /* prints 10.000002, not 10 */
    return 0;
}
The representation of numbers within a computer
We all know that programs written in high-level programming languages have to be interpreted or compiled into machine language that the CPU (central processing unit) can recognize before they can run. The CPU does not work with decimal, octal, or hexadecimal numbers; the numbers we declare in a program are all converted into binary numbers for computation.
Why not convert them into ternary (base-3) numbers for computation?
The inside of a computer is built from many electronic components called ICs (integrated circuits). An IC comes in many shapes, with many pins lined up along its sides or underneath. Every pin of an IC can only carry one of two states, a DC voltage of 0V or 5V; that is, a single IC pin can represent only two states. This characteristic of the IC determines that data inside a computer can only be processed as binary numbers.
Since one bit (one pin) can represent only two states, counting in binary proceeds as 0, 1, 10, 11, 100, and so on.
So when a number takes part in an operation, all operands are converted into binary; decimal 39, for example, becomes binary 00100111.
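For reference, the same conversions can be tried in JS itself; this is just an illustrative snippet (note that toString(2) drops leading zeros, so 00100111 prints as 100111):

// Counting in binary: 0, 1, 10, 11, 100, ...
for (var n = 0; n <= 4; n++) {
    console.log(n.toString(2));
}
// Decimal 39 in binary, and back again
console.log((39).toString(2));      // "100111"
console.log(parseInt("100111", 2)); // 39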
Binary representation of decimal fractions
As mentioned above, data in a program is converted into binary. When decimal fractions take part in an operation, they are also converted into binary; for example, the decimal number 11.1875 becomes binary 1011.0011.
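This conversion can be checked directly, since toString(2) also prints the fractional part (a small illustrative check):

// 11.1875 = 8 + 2 + 1 + 0.125 + 0.0625, all powers of two,
// so it has an exact, finite binary form
console.log((11.1875).toString(2)); // "1011.0011"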
With 4 binary digits after the point, the representable range is 0.0000~0.1111, which means the only fractional values that can be expressed are 0.5, 0.25, 0.125, 0.0625 and sums of them.
The smallest value above 0 that can be represented this way is 0.0625, so decimal fractions between 0 and 0.0625 cannot be expressed with 4 binary digits after the point. Increasing the number of digits after the point increases the number of decimal fractions that can be represented, but no matter how many digits are added, 0.1 can never be reached exactly. In fact, 0.1 converted to binary is 0.000110011001100110011..., where the block 0011 repeats infinitely.
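This is easy to observe: printing 0.1 with more decimal digits shows that what is actually stored is only the closest double-precision approximation (an illustrative check):

// 0.5 is a power of two (binary 0.1), so it is stored exactly
console.log((0.5).toFixed(20)); // "0.50000000000000000000"
// 0.1 is not; the closest double is slightly larger than 0.1
console.log((0.1).toFixed(20)); // "0.10000000000000000555"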
Now look at the following operation:

console.log(0.2 + 0.1);

The binary representations of the two operands are:

0.1 => 0.0001 1001 1001 1001 ... (infinitely repeating)
0.2 => 0.0011 0011 0011 0011 ... (infinitely repeating)
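toString(2) can show the binary digits the engine actually stores for each operand and for their sum; a small illustrative sketch (the printed strings are the rounded, finite stored values, not the infinite expansions above):

console.log((0.1).toString(2));       // 0.000110011001100110011... (cut off and rounded)
console.log((0.2).toString(2));       // 0.001100110011001100110... (cut off and rounded)
console.log((0.1 + 0.2).toString(2)); // 0.010011001100110011001... (not exactly 0.3 in binary)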
The number type in JS is not divided into single precision, double precision, and so on as in C/Java; every number is represented as a double-precision floating-point value. According to the IEEE 754 standard, a single-precision floating-point number uses 32 bits and a double-precision floating-point number uses 64 bits to represent the whole value. A floating-point number is made up of a sign, a mantissa, an exponent, and a base, so not all of the bits are used for the fractional part: the sign and the exponent also occupy bits, while the base (fixed at 2) occupies none. The 64-bit layout can be inspected directly, as sketched below.
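As a rough illustration of that layout, the 64 bits of a double can be read out with a DataView; the function names below (toBits, inspectDouble) are just local helpers for this sketch, not part of any standard API:

// Pad a binary string on the left with zeros to the given width
function toBits(value, width) {
    var bits = value.toString(2);
    while (bits.length < width) {
        bits = "0" + bits;
    }
    return bits;
}

// Split a double into its IEEE 754 parts: sign (1 bit),
// exponent (11 bits, biased by 1023) and mantissa (52 bits)
function inspectDouble(x) {
    var view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);      // big-endian by default
    var hi = view.getUint32(0); // sign, exponent, top 20 mantissa bits
    var lo = view.getUint32(4); // lower 32 mantissa bits

    console.log("sign:    ", hi >>> 31);
    console.log("exponent:", toBits((hi >>> 20) & 0x7FF, 11));
    console.log("mantissa:", toBits(hi & 0xFFFFF, 20) + toBits(lo, 32));
}

inspectDouble(0.1);
// sign:      0
// exponent:  01111111011   (1019 - 1023 = -4, i.e. a factor of 2^-4)
// mantissa:  1001100110011001100110011001100110011001100110011010

Notice that the mantissa is exactly the repeating 0011 pattern from above, rounded off at the 52nd bit.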
The mantissa of a double-precision floating-point number holds at most 52 bits, so when the two values are added, the result 0.0100110011001100110011001100110011001100... has to be cut off and rounded at the limit of the floating-point format; converted back to decimal, this rounded value is 0.30000000000000004.
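A brief check of that result, and the usual way to compare such values in practice (an illustrative sketch; Number.EPSILON requires ES2015 or newer):

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
// Compare with a small tolerance instead of exact equality
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true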
Summary
JS, like other high-level programming languages, cannot handle decimal operations exactly. This is not a design error in the language itself: computers simply cannot represent every decimal fraction in binary, so operations on decimal fractions will often produce unexpected results.
That is all for this article; I hope it helps with your study.