Today I ran into a small puzzle in JavaScript: 0.3 - 0.2 is not 0.1 but 0.09999999999999998. After digging through a lot of material, I found the cause is the following:
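The anomaly is easy to reproduce in any JavaScript console:

```javascript
// Reproduce the anomaly: the result of 0.3 - 0.2 is not the double 0.1.
const diff = 0.3 - 0.2;
console.log(diff);         // 0.09999999999999998
console.log(diff === 0.1); // false
```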
A floating-point number is not a real number. A floating-point format has a finite range and precision, so within that range each real number is represented by the nearest floating-point value the format can express. For example:
0.1 is a real number (0.10000000...), and a double cannot represent 0.1 exactly; the nearest value it can represent exactly is +0.1000000000000000055511151231257827021181583404541015625, so that value stands in for 0.1. Likewise:
0.1 <--> 0.1000000000000000055511151231257827021181583404541015625
0.2 <--> 0.200000000000000011102230246251565404236316680908203125
0.3 <--> 0.299999999999999988897769753748434595763683319091796875
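These mappings can be checked directly: `Number.prototype.toFixed` with a large digit count (modern engines accept up to 100 fraction digits) prints the decimal expansion of the double actually stored.

```javascript
// Print the exact decimal value of the double chosen for each literal.
// 55 digits suffice for 0.1 (its double terminates at the 2^-55 place);
// 0.2 and 0.3 terminate at the 2^-54 place, so 54 digits suffice there.
console.log((0.1).toFixed(55)); // 0.1000000000000000055511151231257827021181583404541015625
console.log((0.2).toFixed(54)); // 0.200000000000000011102230246251565404236316680908203125
console.log((0.3).toFixed(54)); // 0.299999999999999988897769753748434595763683319091796875
```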
When you calculate with floating-point numbers, the result is also a floating-point number. If the exact result cannot be represented, the computer stores the nearest representable value instead. Here, the subtraction operates on the doubles shown above, and the exact difference of those two doubles rounds to 0.09999999999999998, which is not the double nearest to 0.1.
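Because of this, code should not compare floating-point results with strict equality. A minimal sketch of two common workarounds (the helper name `nearlyEqual` is my own, not from the original post): compare with a tolerance, or do the arithmetic on scaled integers.

```javascript
// Workaround 1: compare with a tolerance instead of strict equality.
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}

console.log(nearlyEqual(0.3 - 0.2, 0.1)); // true

// Workaround 2: scale to integers (e.g. work in tenths), subtract,
// then scale back; integer arithmetic in this range is exact.
const result = (3 - 2) / 10;
console.log(result === 0.1); // true
```

The tolerance approach suits general computations; the scaling approach suits values with a fixed number of decimal places, such as currency.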
This article is from the "Htmldom" blog; please retain this source: http://sucheng.blog.51cto.com/6511117/1844688
JS floating-point subtraction anomaly