All numbers in JavaScript, whether integer or decimal, have the type number. Internally, every number is represented as a 64-bit floating-point value (the same representation as a double in Java), so all numbers in JavaScript are floating-point numbers. Following the IEEE 754 standard for floating-point arithmetic, the largest absolute value JavaScript can represent is ±1.7976931348623157 × 10^308, and the smallest is ±5 × 10^-324; these limits can be read from the Number.MAX_VALUE and Number.MIN_VALUE properties, respectively.
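Because every number is a 64-bit float, even simple decimal arithmetic can carry rounding error. A quick illustration:

```javascript
// 0.1 and 0.2 have no exact binary representation,
// so their sum is not exactly 0.3.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```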
For integers, the ECMAScript standard (http://ecma262-5.com/ELS5_HTML.htm) specifies that JavaScript can represent and perform exact arithmetic on integers in the range ±2^53, that is, from -9007199254740992 to +9007199254740992. JavaScript can still operate on integers outside this range, but it does not guarantee the precision of the results. It is also worth noting that for bitwise operations on integers (such as shifts), JavaScript supports only 32-bit integers, i.e. integers from -2147483648 to +2147483647.
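The 2^53 boundary can be verified directly; above it, adjacent odd integers are no longer representable:

```javascript
var limit = Math.pow(2, 53);      // 9007199254740992
console.log(limit + 1 === limit); // true: 2^53 + 1 rounds back down to 2^53
console.log(limit + 2);           // 9007199254740994: even integers here are still exact
```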
Experiment
Display the largest and smallest absolute values a JavaScript number can take:
console.log(Number.MAX_VALUE);
console.log(Number.MIN_VALUE);
The results are 1.7976931348623157e+308 and 5e-324.
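Exceeding these limits does not raise an error: values too large overflow to Infinity, and values too small underflow to 0.

```javascript
console.log(Number.MAX_VALUE * 2);  // Infinity
console.log(-Number.MAX_VALUE * 2); // -Infinity
console.log(Number.MIN_VALUE / 2);  // 0: smaller than the smallest representable value
```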
For integers outside the range of ±2^53, JavaScript cannot give accurate results:
var a = 9007199254740992;
console.log(a + 3);
The correct result of the operation should be 9007199254740995, but JavaScript computes 9007199254740996. If you vary the expression, you will find that such errors occur routinely once an integer exceeds 9007199254740992. A small loss of precision in a calculation may be acceptable, but the following example has more serious consequences:
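The same rounding can be observed directly on integer literals: a value above 2^53 that cannot be represented is rounded at parse time, before any arithmetic happens.

```javascript
// Odd integers above 2^53 round to the nearest representable even integer.
console.log(9007199254740993); // 9007199254740992
console.log(9007199254740995); // 9007199254740996
console.log(9007199254740993 === 9007199254740992); // true
```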
var MAX_INT = 9007199254740992;
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
    // infinite loop
}
Because of this precision problem, the for statement above never terminates: incrementing i from 9007199254740992 rounds straight back to 9007199254740992, so the loop condition always holds.
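If a loop really must count near this boundary, one workaround (a sketch, not the only option) is to iterate with a small offset and add the large base value only when needed, so the loop counter itself stays well inside the exact range:

```javascript
var MAX_INT = 9007199254740992;
// The counter "offset" is a small integer, so ++offset is always exact.
for (var offset = 0; offset < 2; ++offset) {
    // Runs exactly twice. Note the printed sum may still round:
    // both iterations print 9007199254740992.
    console.log(MAX_INT + offset);
}
```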
For bitwise operations, JavaScript supports only 32-bit integers:
var smallInt = 256;
var bigInt = 2200000000;
console.log(smallInt / 2);
console.log(smallInt >> 1);
console.log(bigInt / 2);
console.log(bigInt >> 1);
As you can see, for an integer that fits in 32 bits (256), JavaScript performs the bitwise operation correctly, and the result matches the division (128). For an integer that does not fit in 32 bits, JavaScript can still perform the division correctly (1100000000), but the result of the bitwise operation (-1047483648) is very far from the correct value.
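The reason is that bitwise operators first convert their operand to a signed 32-bit integer (the ToInt32 operation in the ECMAScript specification), which wraps values modulo 2^32. The expression bigInt | 0 applies the same conversion, which makes the wrapped value visible:

```javascript
var bigInt = 2200000000;
// ToInt32 wraps modulo 2^32: 2200000000 - 4294967296 = -2094967296.
console.log(bigInt | 0);       // -2094967296
// The shift then operates on that wrapped value:
console.log((bigInt | 0) / 2); // -1047483648
console.log(bigInt >> 1);      // -1047483648, matching the wrapped division
```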