Floating-Point Types
| Name | CTS Type | Description | Significant figures | Range (approximate) |
| --- | --- | --- | --- | --- |
| float | System.Single | 32-bit single-precision floating point | 7 | ±1.5×10⁻⁴⁵ to ±3.4×10³⁸ |
| double | System.Double | 64-bit double-precision floating point | 15/16 | ±5.0×10⁻³²⁴ to ±1.7×10³⁰⁸ |
If we write 12.3 in code, the compiler automatically treats the number as a double. So if we want 12.3 to be a float, we have to append the suffix F (or f) to the number:
float F = 12.3F;
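A minimal sketch of the suffix rule (the class and variable names here are mine, not from the original):

```csharp
using System;

class LiteralSuffixDemo
{
    static void Main()
    {
        double d = 12.3;      // no suffix: the literal is a double
        // float bad = 12.3;  // compile error: no implicit conversion from double to float
        float f = 12.3F;      // the F (or f) suffix makes the literal a float
        Console.WriteLine(d.GetType()); // System.Double
        Console.WriteLine(f.GetType()); // System.Single
    }
}
```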
Decimal type
In addition, C# provides the decimal type for representing high-precision floating-point numbers:
| Name | CTS Type | Description | Significant figures | Range (approximate) |
| --- | --- | --- | --- | --- |
| decimal | System.Decimal | 128-bit high-precision decimal notation | 28 | ±1.0×10⁻²⁸ to ±7.9×10²⁸ |
As the table above shows, decimal offers as many as 28 significant digits, but the range of values it can represent is smaller than that of the float and double types. The decimal type is not a hardware-native type in C#, so using it has an impact on computational performance.
We can define a floating-point number of type decimal as follows:
decimal d = 12.30M;
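The M suffix is required for the same reason F is: a bare 12.30 literal is a double, and C# provides no implicit conversion from double to decimal. A small sketch (names are illustrative):

```csharp
using System;

class DecimalLiteralDemo
{
    static void Main()
    {
        // decimal bad = 12.30;  // compile error: the plain literal is a double,
        //                       // and double does not implicitly convert to decimal
        decimal d = 12.30M;      // the M (or m) suffix makes the literal a decimal
        Console.WriteLine(d.GetType()); // System.Decimal
        Console.WriteLine(d);   // decimal preserves the scale (the trailing zero);
                                // the separator character depends on the current culture
    }
}
```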
Understanding precision loss in decimal, float, and double
Quoted from: http://topic.csdn.net/t/20050514/20/4007155.html (comments by ivony)
It is very dangerous to use floating-point numbers in calculations that must be exact. Although C# takes many measures to make the results of floating-point arithmetic look quite normal, if you rush to use floating point without understanding its characteristics, you can run into very serious hidden bugs.
Consider the following statement:
double dd = 10000000000000000000000d;
dd += 1;
Console.WriteLine("{0:G50}", dd);
What is the output? Who knows?
The output is: 10000000000000000000000 (unchanged: the += 1 was silently lost)
This is the precision-loss problem of floating-point numbers. Most importantly, when precision is lost, no error is reported and no exception is thrown.
Precision loss can occur in many places. For example, d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
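A concrete instance of this silent loss, assuming nothing beyond standard IEEE 754 doubles:

```csharp
using System;

class PrecisionLossDemo
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so their sum
        // is not exactly the double that 0.3 rounds to.
        Console.WriteLine(0.1 + 0.2 == 0.3);          // False
        Console.WriteLine((0.1 + 0.2).ToString("R")); // 0.30000000000000004
        // No exception, no error: the loss is completely silent.
    }
}
```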
There are two very dangerous misconceptions to watch out for:
1. "decimal is not a floating-point type, so decimal has no precision loss."
Run the following program and see what it prints. Remember: all floating-point variables have precision-loss problems, and decimal is in fact a floating-point type; no matter how high its precision, precision loss still exists!
decimal dd = 10000000000000000000000000000m;
dd += 0.1m;
Console.WriteLine("{0:G50}", dd);
2. "decimal can store numbers larger than double, so converting from double to decimal is never a problem."
Microsoft really should revisit the documentation for decimal. In fact, only the conversions from the integer types to decimal are widening conversions: decimal has greater precision than double, but the largest value it can store is smaller than double's maximum.
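Concretely, a double that is well inside its own range can still overflow decimal, and the cast then fails at runtime. A sketch (the value 1e30 is my choice; it is roughly 12 times decimal.MaxValue, which is about 7.9×10²⁸):

```csharp
using System;

class DoubleToDecimalDemo
{
    static void Main()
    {
        double big = 1e30; // fine for a double, but beyond decimal's range
        try
        {
            decimal d = (decimal)big;
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            // Unlike ordinary precision loss, this failure is loud:
            // the conversion throws instead of silently rounding.
            Console.WriteLine("OverflowException: value too large for decimal");
        }
    }
}
```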