var sum = 0;

for (var i = 0; i < 10; i++) {
    sum += 0.1;
}

console.log(sum);

Will the above program output 1?

Question 8 of the 25 JavaScript interview questions you need to know briefly explains why JavaScript cannot handle decimal arithmetic correctly. Today I revisited that old question and analyzed it at a deeper level.

However, note that the inability to handle decimal arithmetic exactly is not a design error of the JavaScript language itself. Other high-level programming languages, such as C and Java, cannot handle decimal arithmetic exactly either:

#include <stdio.h>

int main(void) {
    float sum;
    int i;

    sum = 0;
    for (i = 0; i < 100; i++) {
        sum += 0.1;
    }
    printf("%f\n", sum); // 10.000002
    return 0;
}

Representation of numbers in a computer

As we all know, programs written in high-level programming languages must be interpreted or compiled into machine language that the CPU (Central Processing Unit) can recognize. The CPU does not work with decimal, octal, or hexadecimal numbers directly; the decimal numbers we declare in a program are converted into binary numbers for calculation.

Why binary, and not, say, ternary?

A computer is composed of many electronic components, such as integrated circuits (ICs).

ICs come in many shapes, with many pins arranged side by side along their edges or underneath (the figure shows only one side). Every IC pin is in either the 0 V or the 5 V state; that is, a single pin can represent only two states. This characteristic of ICs determines that data in a computer can only be processed in binary.

Because one bit (one pin) can only represent two states, binary counting takes the form 0, 1, 10, 11, 100, and so on:

Therefore, in numeric operations, all operands are converted to binary numbers for calculation. For example, 39 is converted to binary 00100111.
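The conversion can be checked directly in JavaScript: Number.prototype.toString accepts a radix argument, though it omits leading zeros.

```javascript
// Viewing the binary form of an integer with toString(2).
// toString omits leading zeros, so 39 prints as "100111" rather than "00100111".
var n = 39;
console.log(n.toString(2)); // "100111"
```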

Binary representation of decimal places

As mentioned above, the data in the program is converted into binary numbers. When decimal fractions are involved in an operation, they too are converted into binary; for example, decimal 11.1875 is converted to binary 1011.0011.
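toString(2) also expands the fractional part, so this conversion can be verified directly (11 is 1011 in binary, and 0.1875 = 0.125 + 0.0625 is 0.0011):

```javascript
// 11.1875 is exactly representable in binary: 8 + 2 + 1 + 0.125 + 0.0625.
console.log((11.1875).toString(2)); // "1011.0011"
```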

With four binary digits after the point, the representable values range from 0.0000 to 0.1111. Therefore they can only represent the decimal fractions 0.5, 0.25, 0.125, 0.0625, and sums of combinations of these digits:

Binary number    Decimal number

0.0000           0

0.0001           0.0625

0.0010           0.125

0.0011           0.1875

0.0100           0.25

0.1000           0.5

0.1001           0.5625

0.1010           0.625

0.1011           0.6875

0.1111           0.9375

From the table above, we can see that the next representable number after 0 is 0.0625, so decimal fractions between 0 and 0.0625 cannot be expressed with four binary digits after the point. Increasing the number of digits after the point increases the number of representable decimal fractions, but no matter how many digits are added, 0.1 can never be reached exactly. In fact, 0.1 converted to binary is 0.000110011001100110011..., where the group 0011 repeats infinitely:
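This repeating expansion can be reproduced with the standard doubling algorithm: multiply the fraction by 2 and peel off the integer part at each step. The helper name fracToBinary below is my own, a minimal sketch:

```javascript
// Expand a fraction in [0, 1) into its first n binary digits by
// repeated doubling: the integer part of x * 2 is the next digit.
function fracToBinary(x, n) {
  var digits = "0.";
  for (var i = 0; i < n; i++) {
    x *= 2;
    if (x >= 1) {
      digits += "1";
      x -= 1;
    } else {
      digits += "0";
    }
  }
  return digits;
}

console.log(fracToBinary(0.1, 20)); // "0.00011001100110011001"
```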

console.log(0.2 + 0.1);

// Binary representation of the operands:
// 0.1 => 0.0001 1001 1001 1001 ... (0011 repeating infinitely)
// 0.2 => 0.0011 0011 0011 0011 ... (0011 repeating infinitely)

JavaScript's Number type is not divided into integer, single-precision, double-precision, and so on; it is uniformly double-precision floating point. According to the IEEE 754 standard, a single-precision floating-point number uses 32 bits and a double-precision one uses 64 bits. A floating-point number consists of a sign, a mantissa (fraction), an exponent, and a base, so not all bits are used for the fractional part: the sign and exponent also occupy bits, while the base (2) is implicit and occupies none:
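As a sketch of that 64-bit layout (1 sign bit, 11 exponent bits, 52 fraction bits), a typed-array view can expose the raw bits of a Number; the helper name toBits is my own:

```javascript
// Reinterpret the 8 bytes of a double as a 64-character bit string.
function toBits(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  var bits = "";
  for (var i = 0; i < 8; i++) {
    // Pad each byte to exactly 8 binary digits.
    bits += ("00000000" + view.getUint8(i).toString(2)).slice(-8);
  }
  return bits;
}

var bits = toBits(0.1);
console.log(bits.slice(0, 1));  // sign bit: "0"
console.log(bits.slice(1, 12)); // 11 exponent bits
console.log(bits.slice(12));    // 52 fraction bits (the 1001... pattern of 0.1)
```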

The fraction of a double-precision floating-point number holds at most 52 bits. Therefore, after the two operands are added, we get a binary expansion like 0.0100110011001100110011001100110011001100..., which must be truncated (rounded) at the 52-bit limit of the fraction. Converted back to decimal, the stored result is 0.30000000000000004.
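The rounding becomes visible when we ask for more significant digits than the default string conversion shows; toPrecision reveals the actual stored values:

```javascript
// None of 0.1, 0.2, 0.3 is stored exactly; toPrecision(20) shows
// the nearest doubles, and why 0.1 + 0.2 lands just above 0.3.
console.log((0.1).toPrecision(20));       // "0.10000000000000000555"
console.log((0.3).toPrecision(20));       // "0.29999999999999998890"
console.log((0.1 + 0.2).toPrecision(20)); // "0.30000000000000004441"
```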

An interesting test conducted today: 0.1 + 0.2 == 0.3 // false

Suddenly depressing, okay! 0.1 + 0.2 turns out to be 0.30000000000000004.

Another case: 2.4 / 0.8 => 2.9999999999999996, which does not come out as an integer; scaling both operands to integers first, (2.4 * 100) / (0.8 * 100), avoids the problem.
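In this case the scaling trick works because 2.4 * 100 and 0.8 * 100 both happen to round to the exact integers 240 and 80; note that this is not guaranteed for every pair of operands, since the scaled values can themselves be inexact.

```javascript
// 2.4 and 0.8 are both inexact in binary, and the quotient comes out low;
// multiplying each by 100 first yields the exact integers 240 and 80.
console.log(2.4 / 0.8);                 // 2.9999999999999996
console.log((2.4 * 100) / (0.8 * 100)); // 3
```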

Now I want to subtract 0.11 from 10.22, and the result has a long tail of decimal places: 10.110000000000001. I used the toFixed method to trim the decimals, but I don't know whether that or converting to integers is more efficient. Let's try:

var date1 = new Date();

for (var i = 0; i < 10000; i++) {
    var result1 = (10.22 - 0.11).toFixed(2);
}

alert(new Date() - date1); // slower

var date2 = new Date();

for (var j = 0; j < 10000; j++) {
    var result2 = (10.22 * 1000 - 0.11 * 1000) / 1000;
}

alert(new Date() - date2); // faster

alert(0.1 + 0.2 == 0.3); // false

alert(0.1 + 0.2); // 0.30000000000000004

alert(parseFloat(0.1) + parseFloat(0.2)); // still 0.30000000000000004

I checked some references: one says this is a bug in JavaScript floating-point calculation, another says it comes from the computer converting numbers to binary at the bottom layer. But why don't all decimals have this problem? I am not sure yet; I will study it in depth when I have time.

Solutions:

The first method is to use JavaScript's toFixed(n) method to keep n decimal places directly. I think this method loses some precision, but if high accuracy is not required, it is usable:

alert((0.1 + 0.2).toFixed(1));
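One caveat worth noting: toFixed returns a string, not a number, so convert back before doing further arithmetic.

```javascript
var s = (0.1 + 0.2).toFixed(1);  // "0.3" — a string
var n = Number(s);               // 0.3 as a number again
console.log(typeof s, typeof n); // "string" "number"
```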

The second method is to write the calculation yourself. The following is a user-defined addition function; using it for addition avoids the problem above.

// Custom addition operation
function addNum(num1, num2) {
    var sq1, sq2, m;

    // Count the decimal places of each operand; integers have none,
    // so split(".")[1] throws and we fall back to 0.
    try {
        sq1 = num1.toString().split(".")[1].length;
    } catch (e) {
        sq1 = 0;
    }

    try {
        sq2 = num2.toString().split(".")[1].length;
    } catch (e) {
        sq2 = 0;
    }

    // Scale both operands to integers, add, then scale back.
    m = Math.pow(10, Math.max(sq1, sq2));
    return (num1 * m + num2 * m) / m;
}

alert(addNum(0.1, 0.2));

Of course, you can also write it inline as alert((num * 3 + 10 * 3) / 3); written this way, the stray decimal places are not displayed.

The difference between alert((num * 3 + 10 * 3) / 3); and alert(num + 10); comes down to how the computer converts the operands to binary at the bottom layer. Perhaps that is the cause of the problem above; it is worth further study by programmers and beginners alike.

First, write a demo to reproduce the problem. I am using an online JavaScript test environment.

Rewrite the displaynum() function:

function displaynum() {
    var num = 22.77;
    alert(num + 10);
}

Click Show. The result is 32.769999999999996, with a long tail of decimal places.

Not all numbers exhibit this phenomenon. Besides 22.77, numbers such as 22.99 and 2.777 do as well, and none of them look special.


If any experts in the garden know the reason, please enlighten me O(∩_∩)O~~!

Summary

Like other high-level programming languages, JavaScript cannot handle decimal arithmetic exactly. This is not a language design error: the computer itself cannot represent every decimal fraction, because not all decimal fractions can be expressed in binary, so decimal arithmetic often produces unexpected results.
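A final check illustrating the summary: decimal fractions built from negative powers of two (0.5, 0.25, 0.125, ...) are exact in binary, while repeating binary fractions like 0.1 and 0.2 are not.

```javascript
// 0.5 and 0.25 are exact powers of two, so their sum is exact;
// 0.1 and 0.2 have infinite binary expansions, so their sum is not.
console.log(0.5 + 0.25 === 0.75); // true
console.log(0.1 + 0.2 === 0.3);   // false
```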