The following note explains the precision of floating-point operations in JavaScript. Someone asked a JavaScript question:
The code is as follows:
var i = 0.07;
var r = i * 100;
alert(r);
Why is the result 7.0000000000000001?
After checking the documentation, we find that JavaScript has only one number type: every number is stored in the 64-bit binary floating-point format (binary64) defined by the IEEE 754 standard: http://en.wikipedia.org/wiki/IEEE_754-2008
A binary64 value has a 53-bit significand, which corresponds to roughly 15 to 17 significant decimal digits. A decimal fraction such as 0.07 has no exact binary representation, so the stored value is only the nearest approximation, and multiplying it by 100 produces a result like 7.0000000000000001 instead of exactly 7.
Similarly, the result of 1/3 is printed as 0.3333333333333333.
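A minimal sketch of the effect described above (the printed values are what current JavaScript engines show for these expressions):

```javascript
// JavaScript numbers are IEEE 754 binary64 doubles, so decimal
// fractions like 0.07 are stored as the nearest binary fraction.
var i = 0.07;
var r = i * 100;
console.log(r);         // slightly above 7, not exactly 7
console.log(r === 7);   // false
console.log(0.1 + 0.2); // 0.30000000000000004
console.log(1 / 3);     // 0.3333333333333333
```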
So how can we correct this value?
You can use the following methods:
I. parseInt
var r4 = parseInt(i * 100);
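Note that parseInt converts its argument to a string and then truncates toward zero, so it only corrects the value when the rounding error makes the product slightly too large. A sketch of the pitfall (the variable names are illustrative):

```javascript
var i = 0.07;
var r4 = parseInt(i * 100); // 7: the tiny excess is truncated away

// Pitfall: when the rounding error lands just below the integer,
// truncation goes the wrong way.
console.log(0.29 * 100);           // 28.999999999999996
console.log(parseInt(0.29 * 100)); // 28, not 29
```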
II. Math.round
var r2 = Math.round((i * 100) * 1000) / 1000;
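Math.round rounds to the nearest integer, so scaling up before rounding and dividing afterwards keeps a chosen number of decimal places. A sketch, where the factor 1000 keeps three decimals:

```javascript
var i = 0.07;

// Round to the nearest integer:
var r2 = Math.round(i * 100);                // 7

// Keep three decimal places by scaling before rounding:
var r3 = Math.round((i * 100) * 1000) / 1000; // 7
console.log(r2, r3);
```

Unlike parseInt, this rounds to the nearest value instead of truncating, so it also handles cases where the error falls just below the target.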
Either way, you obtain 7.
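Another common option, not mentioned in the original answer, is Number.prototype.toFixed, which rounds to a given number of decimal places and returns a string; convert back with Number() if a numeric value is needed:

```javascript
var r = 0.07 * 100;
console.log(r.toFixed(2));         // "7.00" (a string)
console.log(Number(r.toFixed(2))); // 7
```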
Appendix: the full test code:
The Code is as follows:
Test script