This is the result computed by Lua. As you can see, the result looks exact.
This is the result computed by Python3. The result is inexact:
Decimal calculations in JavaScript are also inexact; for details, refer to this question:
Why are the results of decimal calculations in JavaScript often inaccurate?
Shanzhi gave the answer to this question: computers store decimals in binary, and a decimal with a finite, non-repeating expansion in base 10 can have an infinitely repeating expansion in binary.
For example:
0.3 (decimal) = 0.0100110011001100... (binary)
0.6 (decimal) = 0.1001100110011001... (binary)
Computer precision is limited (16 bits are written above), so in binary the decimal calculation 0.6 - 0.3 becomes
0.1001100110011001 - 0.0100110011001100 = 0.0100110011001101 (binary) = 0.3000030517578125 (decimal).
And there it is: the error appears.
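You can see the same effect with real 64-bit doubles. Here is a minimal Python 3 sketch; the textbook pair for this demonstration is 0.1 + 0.2, since all three decimals involved repeat infinitely in binary:

```python
from decimal import Decimal

# 0.1, 0.2 and 0.3 all repeat infinitely in binary, so the stored
# 64-bit doubles are only approximations of the decimals.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# Decimal(float) reveals the exact value the double actually stores:
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875
```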
==========
Q: Numbers in Lua are not divided into integers and decimals, so why are its calculation results exact? Does this cost Lua anything extra? If it does, why is Lua willing to pay it? And if it costs nothing extra, why don't other scripting languages do the same (exact calculation)? Reply content:
Open luaconf.h and the truth is in sight. Your experiment does not show that Lua computes more accurately; it shows that when Lua prints the number, everything after a long run of zeros following the decimal point is discarded by the output format (the default number format defined there is "%.14g"). For example, try the result of = 1.000000000000001, or of = math.asin(1), to see Lua's actual decimal display precision.
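You can mimic this in Python (a sketch; "%.14g" is what Lua 5.1's luaconf.h defines as LUA_NUMBER_FMT, and it is the rounding in the display, not the arithmetic, that hides the tail):

```python
x = 0.1 + 0.2

# Full precision: the error is visible.
print("%.17g" % x)   # 0.30000000000000004

# Lua's default "%.14g" rounds the display to 14 significant
# digits, so the trailing error digits disappear.
print("%.14g" % x)   # 0.3

# The same trick applied to the example from the answer above:
print("%.14g" % 1.000000000000001)   # 1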
In most languages, floating-point numbers are designed for scientific computing. Want to represent a value exactly? Use an int.
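Money is the classic case of this advice. A hypothetical sketch (the variable names are mine, not from any library):

```python
# Instead of floating-point dollars, keep an exact integer count of cents.
price_cents = 60      # $0.60
discount_cents = 30   # $0.30

remainder = price_cents - discount_cents
print(remainder)      # 30, exact, with no rounding error
print("$%d.%02d" % divmod(remainder, 100))   # $0.30
```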
As for Lua, it is probably just that it does not distinguish the display precision of integer and floating-point values. In a language without bignum support, float is not calculated the way you imagine: computer languages store decimals as floating-point values.
A floating-point value consists of three parts: a sign k, an exponent m, and a mantissa n after the binary point; the value is (-1)^k * 1.n * 2^m. That's why you see the difference.
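Those three parts can be pulled out of a real double with Python's struct module (a minimal sketch decoding 0.3; per IEEE 754 double precision the widths are 1 sign bit, 11 exponent bits, 52 mantissa bits):

```python
import struct

# Reinterpret the 8 bytes of a double as a 64-bit unsigned integer.
bits = struct.unpack(">Q", struct.pack(">d", 0.3))[0]

sign     = bits >> 63                # k: 0 means positive
exponent = (bits >> 52) & 0x7FF      # m: stored with a bias of 1023
mantissa = bits & ((1 << 52) - 1)    # n: fraction bits of 1.n

value = (-1) ** sign * (1 + mantissa / 2 ** 52) * 2 ** (exponent - 1023)
print(sign, exponent - 1023, hex(mantissa))   # 0 -2 0x3333333333333
print(value)                                  # 0.3 (the nearest double)
```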
Lua simply displays only that many digits; whether anything in its algorithm is actually different underneath, I don't know.
When a computer stores float-type data, the method differs from how it stores int-type data.
Take int first.
Computers store data in binary. To represent an integer, you can use a sign bit plus a*2^n + b*2^(n-1) + ... + x*2^0. That is to say, in theory, as long as there is enough memory, every integer can be represented exactly.
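A tiny sketch of that expansion (Python ints are arbitrary-precision, so the "enough memory" caveat is handled for you):

```python
n = 300

# Binary digits of 300: 100101100
bits = format(n, "b")
print(bits)

# Rebuild the integer from a*2^n + b*2^(n-1) + ... + x*2^0: exact.
rebuilt = sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(rebuilt == n)   # True
```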
But float is different.
Let's first talk about how it is stored in memory; the scheme itself is actually quite sound.
Floating-point data currently has two memory storage formats: float (single-precision floating point) and double (double-precision floating point). Of these, float uses 4 bytes, that is, 32 bits, and double uses 64 bits. (In fact, I don't know whether all machines are like this; I'm still quite a newbie.) Every bit is either 0 or 1.
A float is represented as a sign bit + 8 exponent bits + 23 mantissa bits.
A double is represented as a sign bit + 11 exponent bits + 52 mantissa bits.
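The gap between the 23-bit and 52-bit mantissas is easy to demonstrate (a sketch; struct's "f" format round-trips a value through a 32-bit float):

```python
import struct

x = 0.3

# Squeeze x through a 32-bit float: only 23 mantissa bits survive.
as_float32 = struct.unpack("f", struct.pack("f", x))[0]

print(x)            # 0.3 (printed from the 52-bit double)
print(as_float32)   # 0.30000001192092896, the nearest 32-bit float
```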
This is the first problem: if the fractional part of your number needs more bits than the mantissa can hold, data is lost (I mean even for values that the representation below could otherwise express). When that happens, the computer usually follows a fixed rule and automatically rounds to a nearby representable number (I remember it is round-to-even?? I'm not sure).
The second problem: the mantissa has the form 1 + a*2^-1 + b*2^-2 + ..., so the machine ends up measuring your number with a nearby representable one. Apart from values that happen to fit this representation exactly, nothing else can be expressed by it. For example, 0.3 can only be approached by sums of powers of two such as 0.25 + 0.03125 + 0.015625 + ..., and taking different precisions gives different approximations. This is also why float data is generally not compared with "==", but by checking whether the difference falls within a small tolerance. An experiment like this really ought to be done in Mathematica.
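The usual tolerance-based comparison looks like this (a minimal sketch; math.isclose has been in the standard library since Python 3.5):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)                             # False: exact comparison fails
print(abs(a - b) < 1e-9)                  # True: manual epsilon check
print(math.isclose(a, b, rel_tol=1e-9))   # True: the library version
```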