This is the result of the calculation in Lua; you can see that the result looks very accurate.
This is the result of the same calculation in Python 3; you can see that the results are inaccurate:
Decimal calculation in JavaScript is inaccurate too; see this question:
Why are decimal calculations often inaccurate in JavaScript?
One answer to that question explains why: computers store decimal numbers in binary,
and a finite decimal in base 10 may be an infinitely repeating fraction in binary.
For example:
0.3 (decimal) = 0.0100110011001100 ... (binary)
0.6 (decimal) = 0.1001100110011001 ... (binary)
And computer precision is limited (say, 16 bits after the point, as above), so the decimal calculation 0.6 - 0.3 is carried out in binary as 0.1001100110011001 - 0.0100110011001100 = 0.0100110011001101 (binary) = 0.3000030517578125 (decimal).
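You can reproduce this 16-bit example with a minimal sketch in Lua (nothing beyond the standard math library is assumed):

```lua
-- Truncate 0.6 and 0.3 to 16 fractional bits, subtract,
-- and convert the result back to decimal.
local a = math.floor(0.6 * 2^16)  -- 0.1001100110011001 in binary (39321)
local b = math.floor(0.3 * 2^16)  -- 0.0100110011001100 in binary (19660)
print(string.format("%.16g", (a - b) / 2^16))  --> 0.3000030517578125
```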
And there you have the error.
=========
Question: Numbers in Lua do not distinguish between integers and decimals, so why are its calculations so accurate? Does Lua pay an extra cost for this? If so, why does Lua do it? If not, why don't other scripting languages do the same (calculate accurately)?
Replies:
Open luaconf.h and the truth is in plain sight. Lua is not calculating more accurately in your experiment; it is just that when it prints a number, the default format (LUA_NUMBER_FMT, "%.14g" in the stock luaconf.h) throws away the trailing digits after the decimal point. For example, try = 1.000000000000001 and look at what gets printed. If you still think Lua's decimal arithmetic is exact, compute asin(1) and print it with more digits.
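A minimal sketch of that difference (assuming a stock luaconf.h, where the default format is "%.14g"; asking for 17 significant digits reveals what the double actually holds):

```lua
-- The default number format hides the error digits.
print(0.6 - 0.3)                          --> 0.3   (formatted with "%.14g")
print(string.format("%.17g", 0.6 - 0.3))  --> 0.29999999999999999
print(1.000000000000001)                  --> 1     (16th digit rounded away)
```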
In most languages, floating-point numbers are designed for scientific calculation. Want to represent a value exactly? Use an int.
As for Lua, it is probably because it does not distinguish integers from floats that it limits the display precision; a language without bignum support should not pretend otherwise. Floats are simply not calculated that way: in computer languages, a number with a decimal point is stored as a floating-point value.
A floating-point value consists of three parts: a sign k, an exponent m, and a mantissa N; the value is (-1)^k * 1.N * 2^m. That is where the differences you see come from.
Lua simply displays only a few of those digits; it is not using a more accurate algorithm.
A computer does not store a float the same way it stores an int.
Take int as an example first.
Computers store numbers in binary, so any integer can be represented as sign + a*2^n + b*2^(n-1) + ... + x*2^0. In theory, as long as there is enough storage space, every integer can be represented exactly.
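For instance, here is a tiny sketch that writes an integer out in that power-of-two form (assuming Lua 5.3+ for integer division; to_binary is just an illustrative helper, not a standard function):

```lua
-- Write a non-negative integer as its binary digits.
local function to_binary(n)
  local bits = ""
  repeat
    bits = (n % 2) .. bits
    n = n // 2          -- integer division (Lua 5.3+)
  until n == 0
  return bits
end

print(to_binary(13))  --> 1101, i.e. 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0
```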
But float is different.
Let's start with how it is stored, which is the crux of the matter.
Floating-point numbers generally come in two storage formats, float (single precision) and double (double precision); the difference is that a float uses 4 bytes (32 bits) while a double uses 64 bits (I am not sure whether this holds on every machine; I am still a novice at this). Each bit can be 0 or 1.
A float is 1 sign bit + 8 exponent bits + 23 mantissa bits.
A double is 1 sign bit + 11 exponent bits + 52 mantissa bits.
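You can peek at these fields directly. A minimal sketch, assuming Lua 5.3+ for string.pack/string.unpack and integer bitwise operators; fields is a hypothetical helper name:

```lua
-- Decompose a double into its IEEE 754 sign / exponent / mantissa fields.
local function fields(x)
  local bits = string.unpack("<I8", string.pack("<d", x))
  local sign     = bits >> 63
  local exponent = (bits >> 52) & 0x7FF       -- 11 bits, biased by 1023
  local mantissa = bits & 0xFFFFFFFFFFFFF     -- 52 fraction bits
  return sign, exponent - 1023, mantissa
end

local s, e, m = fields(0.3)
print(s, e, string.format("%013x", m))  --> 0   -2   3333333333333
-- 0.3 is stored as (-1)^0 * 1.m * 2^-2, with m already rounded:
-- the infinite binary fraction 0.0100110011... cannot fit in 52 bits.
```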
And that is the first problem: if your number needs more bits than the mantissa provides, digits are lost (I mean for numbers that can be written in the form below at all). In that case the computer rounds to a nearby representable value (round to even, if I remember correctly? not very sure), so the stored value is no longer exact.
The second problem: the mantissa expresses 1 + a*2^-1 + b*2^-2 + ..., i.e. it approximates the original number with ever smaller binary fractions. But apart from the numbers that fit this form exactly, nothing else can be represented precisely this way. Take 0.3, for example; the OP can compare 0.25 + 0.25 with 0.2 + 0.3 and see that they behave differently once full precision matters. This is also why floats should generally not be compared with ==; test against a tolerance range instead, because of exactly this hazard. You can experiment with this sort of thing in Mathematica.
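The same experiment is easy in Lua. (Note that 0.2 + 0.3 happens to round to exactly 0.5, so the classic failing case is 0.1 + 0.2; nearly_equal below is just an illustrative tolerance helper, not a library function.)

```lua
-- Exact binary fractions compare fine; rounded ones do not.
print(0.25 + 0.25 == 0.5)  --> true  (0.25 and 0.5 are exact in binary)
print(0.1 + 0.2 == 0.3)    --> false (all three values are rounded)
print(string.format("%.17g", 0.1 + 0.2))  --> 0.30000000000000004

-- Compare against a tolerance range instead of using ==:
local function nearly_equal(a, b, eps)
  eps = eps or 1e-12
  return math.abs(a - b) <= eps * math.max(math.abs(a), math.abs(b), 1.0)
end
print(nearly_equal(0.1 + 0.2, 0.3))  --> true
```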