Floating-Point Types
Name | CTS Type | Description | Significant Figures | Range (approximate)
---- | -------- | ----------- | ------------------- | -------------------
float | System.Single | 32-bit single-precision floating point | 7 | ±1.5×10^−45 to ±3.4×10^38
double | System.Double | 64-bit double-precision floating point | 15/16 | ±5.0×10^−324 to ±1.7×10^308
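The difference in significant figures between the two types can be seen with a short sketch (variable names are illustrative):

```csharp
using System;

class FloatVsDouble
{
    static void Main()
    {
        // A literal with 9 significant digits: more than float (~7 digits) can hold.
        float f = 1.23456789f;
        // The same value fits comfortably within double's ~15-16 digits.
        double d = 1.23456789;

        Console.WriteLine(f.ToString("G9"));   // the float has already been rounded
        Console.WriteLine(d.ToString("G17"));
        Console.WriteLine(f == (float)d);      // rounding d to float gives the same value
    }
}
```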
If we write 12.3 in code, the compiler automatically assumes the literal is a double. So if we want a float with the value 12.3, we have to add the suffix f or F to the number:
float f = 12.3F;
Decimal type
As a supplement, the decimal type is used to represent high-precision decimal numbers.
Name | CTS Type | Description | Significant Figures | Range (approximate)
---- | -------- | ----------- | ------------------- | -------------------
decimal | System.Decimal | 128-bit high-precision decimal notation | 28 | ±1.0×10^−28 to ±7.9×10^28
As the table above shows, decimal has many significant digits, up to 28, but the range of values it can represent is smaller than that of float and double. The decimal type is not natively supported by the hardware in C#, so using it has a cost in calculation performance.
We can define a floating-point number of type decimal as follows:
decimal d = 12.30M;
Understanding the errors of float, double, and decimal
Using floating-point numbers in exact calculations is very dangerous. Although C# takes a number of steps in floating-point arithmetic to make the results look quite normal, if you do not understand the characteristics of floating point and use it carelessly, you can create very serious hidden problems. Consider the following statement:
double dd = 1000000000000000000000000.1;
Console.WriteLine("{0:G50}", dd);
What is the output? Who knows?
The output is: 1000000000000000000000000
This is precision loss in floating-point numbers. Most importantly, when precision is lost, no error is reported and no exception is thrown. Precision loss can occur in many places: for example, d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
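A quick sketch of this silent loss, using the familiar 0.1 + 0.2 case and a d / g * g round trip; both comparisons fail without any exception being thrown:

```csharp
using System;

class PrecisionLoss
{
    static void Main()
    {
        // Neither 0.1 nor 0.2 is exactly representable in binary floating point.
        Console.WriteLine(0.1 + 0.2 == 0.3);         // False

        // Dividing and multiplying back does not recover the original value.
        Console.WriteLine(1.0 / 49.0 * 49.0 == 1.0); // False: the product lands just under 1
    }
}
```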
There are two very dangerous misconceptions to be aware of!
1. "decimal is not a floating-point type, so decimal has no precision loss."
Here is a program you can run to see the result for yourself. Remember: all floating-point variables suffer from precision loss, and decimal is in fact a floating-point type (just base 10); no matter how high its precision, precision loss still exists!
decimal dd = 0.1m;
Console.WriteLine("{0:G50}", dd);
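A minimal sketch that makes decimal's own precision loss visible: 1/3 cannot be represented in 28 decimal digits, so dividing and multiplying back does not return 1.

```csharp
using System;

class DecimalLoss
{
    static void Main()
    {
        decimal d = 1m / 3m;             // 0.3333333333333333333333333333 (28 digits)
        Console.WriteLine(d * 3m);       // 0.9999999999999999999999999999
        Console.WriteLine(d * 3m == 1m); // False: decimal loses precision too
    }
}
```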
2. "decimal can store larger numbers than double, so converting from double to decimal poses no problems."
Microsoft's documentation for decimal really deserves some reflection here. In fact, only the conversion from the integer types to decimal is a widening conversion. decimal has greater precision than double, but the maximum value it can store is smaller than double's.
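A small sketch of the range mismatch (the value 1e30 is illustrative): a double can hold 1e30 easily, but decimal tops out near 7.9×10^28, so the explicit cast throws at run time.

```csharp
using System;

class DoubleToDecimal
{
    static void Main()
    {
        double big = 1e30;              // well within double's range
        try
        {
            decimal d = (decimal)big;   // explicit cast required; may overflow
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            // decimal.MaxValue is about 7.9e28, so 1e30 does not fit
            Console.WriteLine("OverflowException: value out of decimal range");
        }
    }
}
```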
Application Scenarios for Decimal
The decimal type is a 128-bit data type that is suitable for financial and currency calculations.
Of course, decimal is safe in most cases, while binary floating-point numbers are theoretically unsafe.
As for display problems caused by accuracy errors, they are easy to fix. The deeper problems with floating point can be avoided by using integer types instead:
For example, suppose a transfer from account A to account B is computed as 3.788888888888888 yuan. We deduct that amount from account A and add it to account B, but account A does not necessarily lose the exact value. If account A holds 100000000000, then the result of 100000000000 − 3.788888888888888 may well come out as 99999999996.211111111111112, while account B, starting from 0, may well receive the exact 3.788888888888888. In that case 0.011111111111112 yuan has silently vanished, and as such errors accumulate, the discrepancy grows larger and larger.
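The usual fix hinted at above is to keep money in an integral number of minor units (fen/cents), so addition and subtraction are exact. A minimal sketch with illustrative amounts:

```csharp
using System;

class CentsLedger
{
    static void Main()
    {
        long accountA = 10000000000000L; // 100,000,000,000.00 yuan stored as fen
        long accountB = 0L;

        long transfer = 379;             // 3.79 yuan, already rounded to whole fen

        accountA -= transfer;            // exact integer arithmetic, no drift
        accountB += transfer;

        Console.WriteLine(accountA / 100m); // convert back to yuan only for display
        Console.WriteLine(accountB / 100m);
    }
}
```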
double is 64 bits, giving higher precision than the 32-bit float. decimal is a 128-bit high-precision decimal floating-point type commonly used in financial calculations; it is not prone to the binary representation errors of float and double. Its higher precision and smaller range make the decimal type suitable for financial and monetary calculations.
An example
Just after arriving at the office in the morning, I got a call from the test lab: during testing they had found a small problem with the software. The value the software read out was 0.01 smaller than the value shown on the device's LCD.
How could that happen? I was already using the double type, the whole value is only 6 digits long, and double gives about 15 significant digits, which should be plenty. Puzzled, I went back and set a breakpoint to trace it.
The earlier double calculations were fine; the value was 66.24. But when I multiplied 66.24 by 100, the result was wrong: 66.24 * 100.0d = 6623.9999...91. That was the problem. I looked up the double type in MSDN: a Double value represents a 64-bit number with a value between -1.79769313486232E308 and +1.79769313486232E308, and a floating-point number can only approximate a decimal value. The precision of the floating-point type determines how closely it approximates the decimal value. By default, a Double value has 15 decimal digits of precision, though a maximum of 17 digits is maintained internally. So after the multiplication by 100, the precision was no longer sufficient. Since we were processing measurement data, rounding was not allowed, so after the unit conversion the software ultimately displayed 66.23, which is 0.01 less than the LCD's 66.24.
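The failing step can be reproduced in a few lines (a sketch; truncation via the integer cast stands in for the "no rounding allowed" unit conversion):

```csharp
using System;

class LcdBug
{
    static void Main()
    {
        double value = 66.24;
        double scaled = value * 100.0;           // lands just under 6624
        Console.WriteLine(scaled.ToString("G17"));
        Console.WriteLine((long)scaled);         // truncates to 6623, hence the 66.23 display
    }
}
```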
So the natural thought was to switch to the higher-precision decimal type.
When declaring decimal data, you can write (a) decimal myData = 100;, in which case the compiler implicitly converts the integer 100 to 100.0m. You can of course also write (b) decimal myData = 100.0m;. But decimal myData = 100.0d; or decimal myData = 100.0f; will not compile, because the compiler treats 100.0d and 100.0f as floating-point numbers, and there is no implicit conversion between the floating-point types and decimal; you must use an explicit cast to convert between the two. This is important, otherwise the compiler reports an error. This is also why financial software generally uses the decimal type for its processing.
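The conversion rules above can be sketched compactly (the commented-out lines are the ones that would not compile):

```csharp
using System;

class DecimalConversions
{
    static void Main()
    {
        decimal a = 100;               // OK: implicit int -> decimal conversion
        decimal b = 100.0m;            // OK: decimal literal
        // decimal c = 100.0d;         // compile error: no implicit double -> decimal
        // decimal d = 100.0f;         // compile error: no implicit float  -> decimal
        decimal e = (decimal)100.0d;   // OK with an explicit cast

        Console.WriteLine(a == b && b == e); // all three hold the same value
    }
}
```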
Sure enough, after switching to the decimal type everything was fine, and the result came out as exactly 66.24.
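A minimal sketch of the decimal version of the same computation; because decimal works in base 10, the multiplication by 100 is exact:

```csharp
using System;

class LcdFix
{
    static void Main()
    {
        decimal value = 66.24m;
        decimal scaled = value * 100m;    // exactly 6624.00
        Console.WriteLine(scaled);
        Console.WriteLine(scaled / 100m); // back to 66.24 with no drift
    }
}
```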
Original link
Misunderstanding of float, double, and decimal in C# (reprint)