Recently, while programming, I noticed that the float and decimal types produce different results for the same calculation.
You can write a short program to check: float a = 2.2F; float b = 6.6F; float c = b / a; and likewise decimal x = 2.2M; decimal y = 6.6M; decimal z = y / x;
Guess the result of each division, then run the program to check your answer.
This is a very basic question, but it exposed a gap in my own knowledge. The key points are summarized below:
A literal such as x.xx is treated as double by default in .NET.
If you want a single-precision float literal, you must append the suffix F/f.
If you want a decimal literal, you must append the suffix M/m.
If you want a double literal, it is best to append the suffix D/d for clarity.
float (and double) is inherently unsafe for exact work: binary floating point cannot represent most decimal fractions exactly, so precision is lost, while decimal stores decimal digits exactly and is the safe choice for such calculations.
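The loss of precision is easy to demonstrate. The sketch below uses Python rather than C# (Python's float is also an IEEE 754 double, and its decimal.Decimal plays the same role as C#'s decimal), so the behavior carries over:

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so their
# double-precision sum misses 0.3 by one digit in the last place.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal stores the decimal digits themselves, so the same sum is exact.
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note that the Decimal operands are constructed from strings: Decimal(0.1) would inherit the already-inexact double value instead of the exact decimal 0.1.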
Using floating-point numbers in exact calculations is very dangerous, even though C# takes many measures to make the results of floating-point operations look normal.
If you do not understand the characteristics of floating-point numbers and use them carelessly, you run serious risks.
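Finally, the answer to the opening puzzle, again sketched in Python under the same assumption (float here is a double, so C#'s single-precision float drifts even further, but in the same way):

```python
from decimal import Decimal

# Floating-point division: 6.6 and 2.2 are both stored inexactly,
# so the quotient lands just below 3.
c = 6.6 / 2.2
print(c)         # 2.9999999999999996
print(c == 3.0)  # False

# Decimal division: both operands are stored exactly,
# and the quotient is exactly 3.
f = Decimal("6.6") / Decimal("2.2")
print(f)                  # 3
print(f == Decimal("3"))  # True
```

This is exactly the trap in the original exercise: the result that "obviously" should be 3 is not 3 in binary floating point, but is 3 in decimal arithmetic.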