Title: Loss of precision for JavaScript decimal and large integers
Author: Demon
Links: http://demon.tw/copy-paste/javascript-precision.html
Copyright: All articles on this blog are licensed under the "Attribution-NonCommercial-ShareAlike 2.5 China Mainland" terms.
Let's look at two questions first:
0.1 + 0.2 == 0.3                        // false
9999999999999999 == 10000000000000000   // true
The first is the well-known decimal-precision problem, which has been discussed on many blogs. The second surfaced last year, when a data fix in one of our company's database systems turned up some strangely duplicated values. This article starts from the specification and summarizes both problems.
Maximum integer
The numbers in JavaScript are stored with an IEEE 754 double 64-bit floating-point number in the form of:
s x m x 2^e
Here s is the sign bit, indicating positive or negative; m is the mantissa, which has 52 bits in the stored format (53 significant bits counting the hidden bit); e is the exponent, which has 11 bits. The ECMAScript specification gives the range of e as [-1074, 971]. From this it is easy to deduce the largest number JavaScript can represent:
1 x (2^53 - 1) x 2^971 = 1.7976931348623157e+308
The value is Number.MAX_VALUE.
In the same vein, the value of Number.MIN_VALUE can be deduced as:
1 x 1 x 2^(-1074) = 5e-324
Note that MIN_VALUE is the positive number closest to 0, not the smallest number. The smallest (most negative) number is -Number.MAX_VALUE.
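These limits can be checked directly in any JavaScript engine; a quick sketch:

```javascript
// (2^53 - 1) is the full 53-bit significand; 2^971 is the largest exponent
// allowed by the spec. Their product lands exactly on Number.MAX_VALUE.
var max = (Math.pow(2, 53) - 1) * Math.pow(2, 971);
console.log(max === Number.MAX_VALUE);             // true

// 2^-1074 is the smallest positive (denormal) double.
var min = Math.pow(2, -1074);
console.log(min === Number.MIN_VALUE);             // true

// MIN_VALUE is positive; the most negative number is -MAX_VALUE.
console.log(-Number.MAX_VALUE < Number.MIN_VALUE); // true
```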
Loss of decimal precision
The binary expansion of decimal 0.1 is 0.000110011... (0011 repeating); that of decimal 0.2 is 0.00110011... (0011 repeating). The addition 0.1 + 0.2 can be written out as:

  e = -4; m = 1.10011001100...1100 (52 bits)
+ e = -3; m = 1.10011001100...1100 (52 bits)
---------------------------------------------
  e = -3; m = 0.11001100110...0110
+ e = -3; m = 1.10011001100...1100
---------------------------------------------
  e = -3; m = 10.01100110011...001
---------------------------------------------
= 0.0100110011...001 (binary)
= 0.30000000000000004 (decimal)
From the above calculation a conclusion can also be drawn: when a decimal's binary fraction is finite and no more than 52 bits long, it can be stored exactly in JavaScript. For example:
0.5 + 0.25 == 0.75   // true
There are further wrinkles, for example:

0.05 + 0.2 == 0.25   // true
0.05 + 0.9 == 0.95   // false
Understanding these requires looking at the rounding modes of IEEE 754; interested readers can investigate further.
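A common workaround (my sketch, not from the original article) is to scale decimals to integers before doing arithmetic, since integers up to 2^53 are stored exactly; the helper name and parameters here are illustrative:

```javascript
// Add two decimals exactly by working on scaled integers.
// `decimals` is the number of fractional digits to preserve.
function addDecimals(a, b, decimals) {
  var scale = Math.pow(10, decimals);
  return (Math.round(a * scale) + Math.round(b * scale)) / scale;
}

console.log(0.1 + 0.2 === 0.3);                // false
console.log(addDecimals(0.1, 0.2, 1) === 0.3); // true
```

Scaling works only while the scaled values stay within the exactly-representable integer range discussed below.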
Loss of precision for large integers
This problem is mentioned far less often. First, be clear about what the problem actually is:
1. What is the largest integer that JavaScript can store?
This was answered above: Number.MAX_VALUE, a very large number.
2. What is the largest integer that JavaScript can store without losing precision?
According to s × m × 2^e, with the sign bit positive, the mantissa at its maximum of 2^53 - 1, and the exponent e at its maximum of 971, the answer is obviously still Number.MAX_VALUE.
So what exactly is our problem? Back to the code from the start:
9999999999999999 == 10000000000000000   // true
Clearly, sixteen 9s is far fewer than 308 digits. This problem has nothing to do with MAX_VALUE; it comes from the mantissa m having only 52 bits (53 significant bits).
This can be described in code:

var x = 1; // to cut down the work, the initial value can be set larger, e.g. Math.pow(2, 52)
while (x != x + 1) x++;
// the loop stops at x = 9007199254740992, i.e. 2^53
That is, when x is less than or equal to 2^53, x is stored without loss of precision. When x is greater than 2^53, precision may be lost. For example:
Take x = 2^53 + 1, whose binary form is 1 0000...0001 (52 zeros in the middle, 54 bits in all). Stored as a double it becomes:

e = 53; m = 1.0000...00 (52 zero bits; the leading 1 is the hidden bit)

which is obviously the same as the stored form of 2^53.
By the same reasoning, 2^53 + 2, whose binary form is 1 0000...0010 (51 zeros in the middle), can be stored exactly.
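The two cases above can be checked directly in the console:

```javascript
var x = Math.pow(2, 53);    // 9007199254740992

// 2^53 + 1 needs 54 significant bits, so it rounds back to 2^53.
console.log(x + 1 === x);   // true

// 2^53 + 2 = 2 * (2^52 + 1) fits in 53 significant bits, so it is exact.
console.log(x + 2 === x);   // false
console.log(x + 2);         // 9007199254740994
```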
Rule: when x is greater than 2^53 and its binary form has more than 53 significant digits, precision is lost. In essence this is the same phenomenon as the loss of decimal precision.
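Later editions of the language expose this boundary directly: ES2015 (published after this article) added Number.MAX_SAFE_INTEGER and Number.isSafeInteger. A sketch of how they line up with the rule above:

```javascript
// 2^53 - 1 is the largest integer n such that n and n + 1 are both exact.
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true

console.log(Number.isSafeInteger(Math.pow(2, 53) - 1)); // true
console.log(Number.isSafeInteger(Math.pow(2, 53)));     // false

// The puzzle from the top of the article: sixteen 9s round up to 10^16.
console.log(9999999999999999 === 10000000000000000);    // true
```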
The hidden bit is explained in: A tutorial about the Java double type.
Summary
Loss of precision for decimals and large integers is not unique to JavaScript. Strictly speaking, any programming language that stores floating-point values in IEEE 754 format (C/C++/C#/Java, and so on) suffers from it. C# provides the Decimal type and Java the BigDecimal class to avoid this loss of precision where it matters.
Note: there is a decimal proposal for the ECMAScript specification, but it has not yet been formally adopted.
Finally, a test for everyone:
Number.MAX_VALUE + 1 == Number.MAX_VALUE;
Number.MAX_VALUE + 2 == Number.MAX_VALUE;
...
Number.MAX_VALUE + x == Number.MAX_VALUE;
Number.MAX_VALUE + x + 1 == Infinity;
...
Number.MAX_VALUE + Number.MAX_VALUE == Infinity;

// Questions:
// 1. What is the value of x?
// 2. Is Infinity - Number.MAX_VALUE == x + 1 true or false?
Resources
- Wikipedia: Floating point
- ES5: The Number Type
- JavaScript – MAX_INT: Number limits
- Maximum number
- Questions about the loss of JavaScript computational accuracy