Difference between decimal, float, and double


It has always seemed strange that a decimal type was added to C#'s predefined data types. Aren't float and double enough? Today I am going to dig into it.

Floating-point types

 

 

Name     CTS Type        Description                              Significant figures   Range (approximate)
float    System.Single   32-bit single-precision floating point   7                     ±1.5 × 10⁻⁴⁵ to ±3.4 × 10³⁸
double   System.Double   64-bit double-precision floating point   15-16                 ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸

 

If we write 12.3 in code, the compiler automatically treats the literal as a double. So if we want 12.3 to be a float, we must append F (or f) to the number:

float f = 12.3F;
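
For example (a minimal illustration; the commented-out line shows the compiler diagnostic you would get without the suffix):

// float bad = 12.3;  // error CS0664: a double literal cannot be implicitly converted to float
float ok = 12.3F;     // OK: the F suffix makes the literal a float
double d = 12.3;      // OK: a real literal is double by default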

Decimal type

In addition, the decimal type is used to represent high-precision floating point numbers.

 

 

Name      CTS Type         Description                               Significant figures   Range (approximate)
decimal   System.Decimal   128-bit high-precision decimal notation   28                    ±1.0 × 10⁻²⁸ to ±7.9 × 10²⁸

 

From the table above we can see that the decimal type has a large number of significant digits, reaching 28, but its range is smaller than that of the float and double types. The decimal type is not a hardware-backed primitive in C#, so using it may affect computing performance.
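
A rough way to observe this (a minimal sketch, not a rigorous benchmark; timings vary by machine and runtime, but decimal arithmetic is typically much slower because it is implemented in software):

var sw = System.Diagnostics.Stopwatch.StartNew();
double d = 1.000001;
for (int i = 0; i < 10000000; i++) d *= 1.000001;    // hardware floating point
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
decimal m = 1.000001m;
for (int i = 0; i < 10000000; i++) m *= 1.000001m;   // 128-bit software arithmetic
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");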

We can define a floating point number of the decimal type as follows:

decimal d = 12.30M;

Precision errors in decimal, float, and double

 

It is very dangerous to use floating-point numbers in exact calculations. C# takes many measures to make the results of floating-point operations look normal, but if you do not understand the characteristics of floating-point numbers and use them rashly, you are taking a serious risk.

Consider the following statements:

double dd = 1000000000000000000000000d;   // 10^24
dd += 1;
Console.WriteLine("{0:G50}", dd);

What is the output? Can you guess?

Output: 1000000000000000000000000

This is floating-point precision loss. Worst of all, when precision is lost, no error is reported and no exception is thrown.

Floating-point precision loss can occur in many places. For example, d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
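
A quick demonstration (exact results depend on the values chosen, but inequality is common):

double d = 0.1;
double g = 3.0;
Console.WriteLine(d * g / g == d);   // can print False
Console.WriteLine(d / g * g == d);   // can print False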

There are two very dangerous misconceptions:

1. "decimal is a decimal type, not a floating-point type, so it suffers no loss of precision."

Run the following program and see the result for yourself. Remember: every floating-point variable has the problem of precision loss, and decimal is genuinely a floating-point type. No matter how high its precision, loss of precision still exists!

decimal dd = 10000000000000000000000000000m;
dd += 0.1m;
Console.WriteLine("{0:G50}", dd);

2. "decimal can store larger numbers than double, so converting from double to decimal never causes problems."

Microsoft's documentation on decimal has something to answer for here. In fact, only the conversion from an integral type to decimal is a widening conversion. decimal's precision is greater than double's, but the largest number it can store is smaller than double's maximum.
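
For example (a minimal sketch; the cast compiles, but it can fail at run time when the double value exceeds decimal's range):

decimal fromInt = 100;        // fine: int -> decimal is a widening conversion
double big = 1.0e30;          // comfortably inside double's range
try
{
    decimal d = (decimal)big; // explicit cast is required, and here it fails
}
catch (OverflowException)
{
    Console.WriteLine("1e30 does not fit: decimal's maximum is about 7.9e28");
}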

 

 

 

"The decimal type is a 128-bit data type suitable for financial and currency computing ."

Of course, decimal is safe in most cases, but floating point is theoretically unsafe.

Display problems caused by precision errors are easy to fix. The following kind of problem, however, is one that floating-point numbers cause and integer types avoid:

For example, suppose a transfer from account A to account B amounts to 3.788888888888888 yuan: we deduct that much from account A and add that much to account B. In practice, account A may not be debited by the exact value. If account A holds 100000000000, the result of 100000000000 - 3.788888888888888 cannot be stored exactly; instead of 99999999996.211111111111112 it may come out as, say, 99999999996.2. Account B, starting at 0, will very likely gain the exact value 3.788888888888888. As a result, 0.011111111111112 yuan disappears, and the discrepancy grows larger and larger over time.
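
A minimal sketch of this scenario (account values as in the example above; with double, the books may silently stop balancing):

double accountA = 100000000000d;
double accountB = 0d;
double amount = 3.788888888888888d;
accountA -= amount;                               // the debit may not be exact
accountB += amount;                               // the credit may be "exact"
Console.WriteLine("{0:R}", accountA + accountB);  // may no longer equal 100000000000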

double is 64-bit and therefore more precise than the 32-bit float (single).

decimal is a 128-bit high-precision floating-point type. It is often used in financial calculations because, within its 28-29 significant digits, it represents decimal fractions exactly and avoids binary floating-point representation errors.

The decimal type has higher precision and a smaller range, which makes it suitable for financial and monetary calculations.

 

 

 

When I arrived at the office in the morning, I was called to the test room. A small problem had been found while testing the software: the data read by the software was 0.01 less than the data displayed on the LCD.

How could this happen? I had already used the double type, and the whole value is only 6 digits long, well within double's precision, which should be more than enough. I didn't understand, so I went back and set a breakpoint to trace it.

The double value looked fine at first: the reading was 66.24. But when I multiplied 66.24 by 100, the result was wrong: 66.24 * 100d = 6623.9999…91. That is where the problem lay. According to MSDN, Double represents a 64-bit double-precision number between -1.79769313486232e308 and +1.79769313486232e308. A floating-point number can only approximate a decimal value; the precision of the floating-point type determines how good the approximation is. By default a Double value carries 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. So after multiplying by one hundred, the precision was no longer sufficient. Since the data processing was not allowed to round, the unit conversion truncated the value and the software finally displayed 66.23, which is 0.01 less than the 66.24 shown on the LCD.
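
The failure can be reproduced in a few lines (a minimal sketch; the exact digits printed depend on the format and runtime):

double d = 66.24;
Console.WriteLine("{0:G17}", d * 100);       // prints something like 6623.9999999999991
Console.WriteLine((int)(d * 100) / 100.0);   // truncating instead of rounding yields 66.23

decimal m = 66.24m;
Console.WriteLine(m * 100);                  // 6624.00 — the decimal value stays exact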

Therefore, the decimal type with higher precision should be used.

 

 

Type      Approximate range              Precision                  .NET Framework type
decimal   ±1.0 × 10⁻²⁸ to ±7.9 × 10²⁸    28-29 significant digits   System.Decimal

 

When declaring decimal data, we can write (a) decimal myData = 100; in this case the compiler implicitly converts the integer 100 to 100.0m. We can of course also write (b) decimal myData = 100.0m;. However, decimal myData = 100.0d; or decimal myData = 100.0f; will not compile, because the compiler treats 100.0d and 100.0f as binary floating-point values, and there is no implicit conversion between the binary floating-point types and decimal. An explicit cast must be used between these types; otherwise the compiler reports an error. This is why general financial software uses the decimal type.
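
In code (a minimal illustration of the conversions described above):

decimal a = 100;               // OK: the int 100 is implicitly converted to 100m
decimal b = 100.0m;            // OK: the M suffix makes a decimal literal
// decimal c = 100.0d;         // error CS0664: no implicit conversion from double to decimal
decimal c = (decimal)100.0d;   // OK: an explicit cast is required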

Now, using the decimal type, the result is displayed as 66.24.

 

From http://lj.soft.blog.163.com/blog/static/79402481201032210173381/
