Here's something that puzzled me for a while: why does C# add decimal to its predefined data types? Aren't float and double enough? Let's dig into it today.

Floating-point types

| Name | CTS Type | Description | Significant figures | Range (approximate) |
|------|----------|-------------|---------------------|----------------------|
| float | System.Single | 32-bit single-precision floating point | 7 | ±1.5×10⁻⁴⁵ to ±3.4×10³⁸ |
| double | System.Double | 64-bit double-precision floating point | 15/16 | ±5.0×10⁻³²⁴ to ±1.7×10³⁰⁸ |

If we write 12.3 in code, the compiler automatically treats the number as a double. So if we want 12.3 to be a float, we have to append the suffix f/F to the number:

float f = 12.3F;
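A minimal sketch of how the suffix changes what compiles (the variable and class names here are just illustrative):

```csharp
using System;

class LiteralSuffixDemo
{
    static void Main()
    {
        double d = 12.3;        // no suffix: the literal is a double
        // float bad = 12.3;    // compile error: cannot implicitly convert double to float
        float f1 = 12.3F;       // F (or f) marks the literal as a float
        float f2 = (float)12.3; // an explicit cast from double also works

        Console.WriteLine($"{d} {f1} {f2}");
    }
}
```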

**Decimal Type**

As a supplement, the **decimal type** is provided to represent high-precision floating-point numbers.

| Name | CTS Type | Description | Significant figures | Range (approximate) |
|------|----------|-------------|---------------------|----------------------|
| decimal | System.Decimal | 128-bit high-precision decimal notation | 28 | ±1.0×10⁻²⁸ to ±7.9×10²⁸ |

As you can see from the table above, decimal has a large number of significant digits, reaching 28, but the range of values it can represent is smaller than that of the float and double types. The **decimal type** is not a primitive type in C#, so using it in calculations affects performance.
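decimal arithmetic is implemented in software rather than in the hardware floating-point unit, which is where the performance cost comes from. A minimal timing sketch (the loop count and class name are illustrative; absolute numbers vary by machine):

```csharp
using System;
using System.Diagnostics;

class DecimalVsDoubleTiming
{
    static void Main()
    {
        const int N = 10_000_000;

        var sw = Stopwatch.StartNew();
        double dSum = 0;
        for (int i = 0; i < N; i++) dSum += 1.1;   // hardware floating-point add
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        decimal mSum = 0;
        for (int i = 0; i < N; i++) mSum += 1.1m;  // software 128-bit decimal add
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
        // On a typical machine the decimal loop is roughly an order of
        // magnitude slower; as a bonus, mSum is exactly 11000000 while
        // dSum carries a small binary rounding error.
    }
}
```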

We can define a **decimal type** floating-point number in a way that looks like this:

decimal d = 12.30M;

Understanding the errors of decimal, float and double

It is dangerous to use floating-point numbers in exact calculations. Although C# takes a number of steps in floating-point operations to make the results look normal, in fact, if you do not understand the characteristics of floating point and use it rashly, it can cause very serious problems.

Consider the following statement:

double dd = 10000000000000000000000d;
dd += 1;
Console.WriteLine("{0:G50}", dd);

What do you think the output is?

The output is: 10000000000000000000000. The added 1 has simply vanished.

This is floating-point precision loss. The most important thing is that when precision is lost, no error is reported and no exception is thrown.

Floating-point precision loss can occur in many places: for example, d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
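A concrete instance of this, using 49 (a classic counterexample: 1/49 is not exactly representable in binary, so dividing and multiplying back does not round-trip):

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        double d = 1.0;
        double g = 49.0;

        // 1.0 / 49.0 is rounded once, then * 49.0 is rounded again.
        double roundTrip = d / g * g;

        Console.WriteLine(roundTrip == d);            // False
        Console.WriteLine(roundTrip.ToString("G17")); // a value just below 1
    }
}
```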

There are two other very dangerous misconceptions:

1. "decimal is not a floating-point type, so there is no precision loss with decimal."

Here's a program; run it and see what the result is. Remember: all floating-point variables have the precision-loss problem, and decimal genuinely is a floating-point type (base 10 instead of base 2). No matter how high its precision, the loss of precision still exists.

decimal dd = 10000000000000000000000000000m;
dd += 0.1m;
Console.WriteLine("{0:G50}", dd);

2. "decimal can store a wider range of values than double, so converting from double to decimal is never a problem."

Microsoft really needs to reflect on its documentation for decimal here. In fact, only the conversion from the integral types to decimal is a widening conversion: decimal has greater precision than double, but the maximum value it can store is smaller than double's.
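The range mismatch is easy to demonstrate: decimal.MaxValue is about 7.9×10²⁸, while double.MaxValue is about 1.8×10³⁰⁸, so casting a large double to decimal throws at runtime (the class name is illustrative):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (~7.9e28)
        Console.WriteLine(double.MaxValue);  // ~1.8e308

        double big = 1e300; // far beyond decimal's range
        try
        {
            decimal m = (decimal)big; // explicit cast required, and it overflows
            Console.WriteLine(m);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException: value out of decimal's range");
        }
    }
}
```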

"The decimal type is a 128-bit data type that is appropriate for financial and currency calculations."

Of course, decimal is safe in most cases, but floating-point numbers are theoretically unsafe.

Precision errors that only affect display are easy to fix. The real problem with floating-point numbers, the one that integral types avoid, is this:

For example, suppose we transfer money from account A to account B. The calculated amount is 3.788888888888888 yuan, so we deduct that much from A and add that much to B. But A's balance does not necessarily decrease by the exact value. Say A holds 100000000000: the exact result of 100000000000 - 3.788888888888888 is 99999999996.211111111111112, which needs 26 significant digits, more than a double can hold, so A's stored balance is rounded. Meanwhile B's balance of 0 very likely does receive the exact 3.788888888888888. A fraction of a cent disappears in the transfer, and over time the discrepancy grows larger and larger.
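A sketch of that transfer with both types (the balances and amount are taken from the paragraph above; the class name is illustrative). With double, the large balance cannot absorb all the fractional digits; with decimal, the books stay balanced:

```csharp
using System;

class TransferDemo
{
    static void Main()
    {
        // Transfer 3.788888888888888 yuan out of account A (balance 100000000000).
        double aD = 100000000000.0;
        double amountD = 3.788888888888888;
        aD -= amountD;
        // The mathematically exact result, 99999999996.211111111111112,
        // needs 26 significant digits; a double stores only ~15-17.
        Console.WriteLine(aD.ToString("G17"));

        decimal aM = 100000000000m;
        decimal amountM = 3.788888888888888m;
        aM -= amountM;
        Console.WriteLine(aM);                            // 99999999996.211111111111112, exact
        Console.WriteLine(aM + amountM == 100000000000m); // True: the books balance
    }
}
```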

double is 64-bit, with higher precision than the 32-bit single (float).

decimal is a 128-bit high-precision floating-point type, often used in financial calculations, because it avoids the binary floating-point representation errors shown above.

Compared with double, the decimal type has higher precision and a smaller range, which makes it appropriate for financial and monetary calculations.

Just after I arrived at the office in the morning, I was called to the test lab: during testing they had found a small problem in the software. The value the software read out was 0.01 smaller than the value shown on the device's LCD.

How did this happen? For the data type I had used a double. The entire value is only 6 digits long, well within a double's precision, so it should have been enough. I didn't understand, so I went back and traced it with breakpoints.

Before the multiplication, the double value was fine: 66.24. But when I multiplied 66.24 by 100, the result was wrong: 66.24 * 100.0d = 6623.9999...91. That was the problem. MSDN says of double: the Double value type represents a double-precision 64-bit number between -1.79769313486232E+308 and +1.79769313486232E+308, and a floating-point number can only approximate a decimal number; the precision of the floating-point number determines how closely it approximates it. By default, a Double value has 15 decimal digits of precision, though a maximum of 17 digits is maintained internally. So after multiplying by 100, the precision was not enough. And because we were not allowed to round the data, after the unit conversion the software ultimately displayed 66.23, which is 0.01 smaller than the 66.24 on the LCD.
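The exact digits of the 66.24 * 100 result vary by machine and runtime, but the underlying cause, decimal fractions that have no exact base-2 representation, can be reproduced reliably with the well-known 0.1 + 0.2 pair (66.24 has the same property):

```csharp
using System;

class BinaryFractionDemo
{
    static void Main()
    {
        // Neither 0.1 nor 0.2 is exactly representable in binary floating point.
        Console.WriteLine(0.1 + 0.2 == 0.3);            // False
        Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004

        // decimal stores base-10 digits directly, so the same sum is exact.
        Console.WriteLine(0.1m + 0.2m == 0.3m);         // True
    }
}
```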

Hence the conclusion: the higher-precision decimal type should be used instead.

| Type | Approximate range | Precision | .NET Framework type |
|------|-------------------|-----------|---------------------|
| decimal | ±1.0×10⁻²⁸ to ±7.9×10²⁸ | 28 to 29 significant digits | System.Decimal |

When declaring **decimal type** data, you can write (a) decimal myData = 100; here the compiler implicitly converts the integer 100 to 100.0m. You can also write (b) decimal myData = 100.0m. But decimal myData = 100.0d or decimal myData = 100.0f will not work: the compiler treats 100.0d and 100.0f as floating-point numbers, and there is no implicit conversion between the floating-point types and decimal; you must use a cast to convert between the two. This is important, otherwise the compiler will complain. That is why financial software generally uses the decimal type for its calculations.
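The conversion rules above, sketched out (comments mark the lines that would not compile; the names are illustrative):

```csharp
using System;

class DecimalConversionDemo
{
    static void Main()
    {
        decimal a = 100;             // OK: implicit widening conversion from int
        decimal b = 100.0m;          // OK: decimal literal with the m suffix

        // decimal bad1 = 100.0d;    // compile error: no implicit double -> decimal
        // decimal bad2 = 100.0f;    // compile error: no implicit float  -> decimal

        decimal c = (decimal)100.0d; // OK with an explicit cast
        decimal d = (decimal)100.0f; // OK with an explicit cast

        double back = (double)b;     // decimal -> double also requires a cast

        Console.WriteLine($"{a} {b} {c} {d} {back}");
    }
}
```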

OK, after switching to the decimal type, the result displays as the complete, correct 66.24.