The difference between decimal, double, and float in C#

Source: Internet
Author: User

Transferred from: http://www.cnblogs.com/lovewife/articles/2466543.html

Single precision refers to a 4-byte (32-bit) floating-point number: float.
Double precision refers to an 8-byte (64-bit) floating-point number: double.

decimal is a high-precision 128-bit type.

Floating-point types

| Name   | CTS Type      | Description                             | Significant figures | Range (approximate)        |
| ------ | ------------- | --------------------------------------- | ------------------- | -------------------------- |
| float  | System.Single | 32-bit single-precision floating point  | 7                   | ±1.5×10⁻⁴⁵ to ±3.4×10³⁸    |
| double | System.Double | 64-bit double-precision floating point  | 15/16               | ±5.0×10⁻³²⁴ to ±1.7×10³⁰⁸  |

If we write 12.3 in code, the compiler automatically treats the number as a double. So if we want a float with the value 12.3, we have to append the suffix F (or f) to the literal:

float f = 12.3F;
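The literal-suffix rules above can be sketched in a small program (the variable names are illustrative, not from the original):

```csharp
using System;

class LiteralSuffixes
{
    static void Main()
    {
        double d = 12.3;    // no suffix: the compiler assumes double
        float  f = 12.3F;   // F (or f) suffix: float
        decimal m = 12.3M;  // M (or m) suffix: decimal

        Console.WriteLine(d.GetType()); // System.Double
        Console.WriteLine(f.GetType()); // System.Single
        Console.WriteLine(m.GetType()); // System.Decimal
    }
}
```

Writing `float f = 12.3;` without the suffix is a compile error, because there is no implicit conversion from double to float.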

Decimal type

In addition, the decimal type is used to represent high-precision decimal numbers:

| Name    | CTS Type       | Description                          | Significant figures | Range (approximate)       |
| ------- | -------------- | ------------------------------------ | ------------------- | ------------------------- |
| decimal | System.Decimal | 128-bit high-precision decimal       | 28                  | ±1.0×10⁻²⁸ to ±7.9×10²⁸   |

As the table shows, decimal has a large number of significant digits, reaching 28, but the range of values it can represent is smaller than that of the float and double types. The decimal type is not a primitive hardware type in C#, so using it has an impact on calculation performance.

We can define a floating-point number of type decimal as follows:

decimal d = 12.30M;

Understanding the errors of decimal, float, and double

Using floating-point numbers in precise calculations is very dangerous. Although C# takes a number of steps to make the results of floating-point arithmetic look quite normal, if you do not understand the characteristics of floating point and rush to use it, you will create very serious hidden problems.

Consider the following statement:

double dd = 10000000000000000000000d;

dd += 1;

Console.WriteLine("{0:G50}", dd);

What is the output?

The output is: 10000000000000000000000, the same value we started with. The added 1 has vanished.

This is the precision-loss problem of floating-point numbers. Most importantly, when precision is lost, no error is reported and no exception is thrown.

Precision loss in floating-point numbers can occur in many places. For example, d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
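A minimal demonstration of the round-trip problem, using 0.1 and 3.0 as example values (any values that are not exactly representable in binary can trigger it):

```csharp
using System;

class MultiplyDivide
{
    static void Main()
    {
        double d = 0.1;
        double g = 3.0;

        // (d * g) / g is not guaranteed to give back d exactly,
        // because each operation rounds to the nearest double.
        double roundTrip = d * g / g;

        Console.WriteLine(roundTrip == d);      // typically False
        Console.WriteLine("{0:R}", roundTrip);  // e.g. 0.10000000000000002
    }
}
```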

There are two very dangerous mistaken beliefs to watch out for!!

1. "decimal is not a floating-point type, so decimal has no precision loss."

Run the following program and see what the result is. Remember: all floating-point variables have the precision-loss problem, and decimal is itself a floating-point type. No matter how high its precision, precision loss still exists!

decimal dd = 10000000000000000000000000000m;

dd += 0.1m;

Console.WriteLine("{0:G50}", dd);

2. "decimal can store a larger number than double, so converting from double to decimal never has any problem."

Microsoft really should revise the documentation for decimal. In fact, only the conversion from the integer types to decimal is a widening conversion. decimal has greater precision than double, but the maximum value it can store is much smaller than double's.
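The point can be verified directly: converting double to decimal requires an explicit cast, and a double beyond decimal's range (about ±7.9×10²⁸) throws at runtime. The values 123.45 and 1e300 below are arbitrary examples:

```csharp
using System;

class DoubleToDecimal
{
    static void Main()
    {
        double small = 123.45;
        decimal ok = (decimal)small;   // explicit cast required; this value fits
        Console.WriteLine(ok);         // 123.45

        double huge = 1e300;           // far beyond decimal's range
        try
        {
            decimal overflow = (decimal)huge;
            Console.WriteLine(overflow);
        }
        catch (OverflowException)
        {
            Console.WriteLine("double value too large for decimal");
        }
    }
}
```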

"The decimal type is a 128-bit data type that is suitable for financial and monetary calculations."

Of course, decimal is safe in most cases, but floating-point numbers are theoretically unsafe.

Display problems caused by precision error are easy to fix. The deeper floating-point problem, unlike problems with integer types, cannot simply be coded around and must be actively guarded against:

For example, suppose a transfer from account A to account B is calculated to be 3.788888888888888 yuan. We deduct that much from account A and add that much to account B. But account A will not necessarily be debited by the exact value: if account A holds 100000000000, the result of 100000000000 - 3.788888888888888 may come out as 99999999996.2 after precision loss, while account B, starting from 0, may receive the exact value 3.788888888888888. In that case 0.011111111111112 yuan has simply vanished, and as such differences accumulate, the discrepancy grows larger and larger.
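One common defense, sketched here under the assumption that amounts are rounded to cents before posting: round the computed amount once, then apply the same rounded value to both accounts, so the books always balance.

```csharp
using System;

class TransferRounding
{
    static void Main()
    {
        // A hypothetical computed transfer amount with more precision than cents:
        decimal computed = 3.788888888888888m;

        // Round once, to 2 decimal places, BEFORE touching either account,
        // so both sides move by exactly the same amount.
        decimal amount = Math.Round(computed, 2, MidpointRounding.AwayFromZero);

        decimal accountA = 100000000000m;
        decimal accountB = 0m;

        accountA -= amount;
        accountB += amount;

        Console.WriteLine(amount);                              // 3.79
        Console.WriteLine(accountA + accountB == 100000000000m); // True: total conserved
    }
}
```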

double is 64-bit, with higher precision than the 32-bit single-precision float.

decimal is a 128-bit high-precision floating-point type, often used in financial calculations, because it avoids the binary representation error that float and double show for decimal fractions. The decimal type has higher precision and a smaller range, which makes it suitable for financial and monetary calculations.

Just after arriving at the office in the morning, I was called to the pilot room: during testing, the software had turned up a small problem. The value the software read out was 0.01 smaller than the value shown on the device's LCD.

How could that happen? I had already used the double type, and the whole value is only 6 digits long; a double carries at least 15 significant digits, which should be more than enough. I didn't understand, so I went back and traced it with a breakpoint.

The earlier double calculations were fine; the value was 66.24. But when I multiplied 66.24 by 100, the result was wrong: 66.24 * 100.0d = 6623.9999...91. There was the problem. I looked up the double type in MSDN: a Double value type represents a 64-bit floating-point number with a value between -1.79769313486232e308 and +1.79769313486232e308, and floating-point numbers can only approximate decimal values; the precision of the floating-point number determines how close the approximation is. By default, a Double value carries 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. So after the multiplication by 100, the precision was no longer sufficient. Since we were processing measured data, rounding up was not allowed, so after the unit conversion the software displayed 66.23, which is 0.01 less than the 66.24 shown on the LCD.
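The bug reduces to a few lines. The exact double result can vary by platform, but on the author's machine 66.24 * 100.0 came out just under 6624, so truncating to an integer number of hundredths gave 6623:

```csharp
using System;

class UnitConversion
{
    static void Main()
    {
        double d = 66.24;
        // Round-trip format shows the true stored value,
        // typically 6623.9999999999991 rather than 6624:
        Console.WriteLine("{0:R}", d * 100.0);
        Console.WriteLine((int)(d * 100.0)); // truncation then yields 6623

        decimal m = 66.24m;
        Console.WriteLine(m * 100m);         // exactly 6624.00
        Console.WriteLine((int)(m * 100m));  // 6624
    }
}
```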

So the natural next thought was to use the decimal type, which has higher precision.

| Type    | Approximate range         | Precision                 | .NET Framework type |
| ------- | ------------------------- | ------------------------- | ------------------- |
| decimal | ±1.0×10⁻²⁸ to ±7.9×10²⁸   | 28-29 significant digits  | System.Decimal      |

When declaring data of type decimal, you can write (a) decimal myData = 100; and the compiler implicitly converts the integer 100 to 100.0m. You can also write (b) decimal myData = 100.0m;. But decimal myData = 100.0d; or decimal myData = 100.0f; will not compile, because the compiler treats 100.0d and 100.0f as floating-point numbers, and there is no implicit conversion between the floating-point types and decimal; you must use an explicit cast to convert between them. This is important, otherwise the compiler reports an error. This is why financial software generally uses the decimal type when processing money.
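The conversion rules just described, in compilable form (the commented-out lines are the ones that would fail to compile):

```csharp
using System;

class DecimalDeclarations
{
    static void Main()
    {
        decimal a = 100;            // OK: int 100 is implicitly converted to 100m
        decimal b = 100.0m;         // OK: M suffix makes the literal a decimal
        // decimal c = 100.0d;      // compile error: no implicit double -> decimal
        // decimal d = 100.0f;      // compile error: no implicit float  -> decimal
        decimal e = (decimal)100.0d; // an explicit cast is required instead

        Console.WriteLine(a + b + e); // 300.0
    }
}
```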

Sure enough, after switching to the decimal type, everything was fine, and the result came out as exactly 66.24.
