Talking About C#: the Math Class, Arithmetic, and Floating Point (Part 1)
The C# language supports several numeric types, which fall into three categories: integers, floating-point numbers, and decimals.
They may seem obvious at first, but a closer look reveals some subtleties.
In a C# program, an integer literal (a number with no decimal point) is treated as an int, unless its value is greater than the maximum int value; depending on the value, it is then treated as uint, long, or ulong, in that order. A literal with a decimal point is treated as a double. That is, (1.0).GetType() == typeof(double).
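To see this in action, here is a minimal sketch (my own illustration, assuming .NET 6+ with top-level statements) that prints the type the compiler picks for each literal:

using System;

Console.WriteLine((1).GetType());          // System.Int32: fits in int
Console.WriteLine((4000000000).GetType()); // System.UInt32: too big for int
Console.WriteLine((5000000000).GetType()); // System.Int64: too big for uint
Console.WriteLine((1.0).GetType());        // System.Double
Console.WriteLine((1.0).GetType() == typeof(double)); // True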
The letter 'u' in front of a type name stands for unsigned. Because these types have no sign bit, a u-prefixed value is always zero or a positive integer; it can never be negative.
Checking for integer overflow
Consider the following code:

short s = 32767; s += 1;
ushort us = 0; us -= 1;
In the first case, a signed number is incremented past its maximum value; because of the way integers are represented in memory, the result wraps around to -32768.
In the second case, an unsigned number is decremented below 0, and the result wraps around to 65535.
These two examples show overflow and underflow, respectively. If you want to catch such cases, you can use the checked keyword:
short s = 32767;
checked
{
    s += 1; // will result in an overflow
}
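Here is a runnable version of that fragment (a sketch of mine, assuming .NET 6+ with top-level statements): in a checked context the overflow surfaces as a System.OverflowException instead of silently wrapping:

using System;

short s = 32767;
try
{
    checked
    {
        s += 1; // the narrowing cast back to short overflows here
    }
}
catch (OverflowException e)
{
    Console.WriteLine(e.Message); // "Arithmetic operation resulted in an overflow."
}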
or use the following compiler switch:
/checked+
The default compiler switch corresponds to:
/checked-
With /checked+, all integer arithmetic is checked for overflow at run time. Compile-time overflow is handled differently: by default, the compiler reports overflow and underflow in constant expressions as an error, regardless of the compiler switch.
For example, the statement short s = 32767 + 1; produces a compile-time error, because the addition is evaluated during compilation.
Or consider the following case:

const int i1 = 65536;
const int i2 = 65536;
int i3 = i1 * i2;
Because i1 and i2 are const values, the compiler evaluates i1 * i2 at compile time, encounters an overflow, and reports a compilation error.
The compiler switch does not override this behavior, but the unchecked keyword does:

int i3 = unchecked(i1 * i2); // compiles normally
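As a quick illustration (my own sketch, not from the original text), the unchecked product simply wraps around modulo 2^32:

using System;

const int i1 = 65536;
const int i2 = 65536;
int i3 = unchecked(i1 * i2); // 65536 * 65536 = 2^32, which wraps to 0
Console.WriteLine(i3);       // prints 0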
Decimal type
The decimal keyword denotes a 128-bit data type. Compared with the floating-point types, decimal has higher precision and a smaller range, which makes it suitable for financial and monetary calculations. The decimal type has an approximate range of ±1.0 × 10^-28 to ±7.9 × 10^28, with 28-29 significant digits.
It uses 16 bytes (128 bits) to store each value. The bits are divided into a 96-bit integer, a sign bit, and a scaling factor that can vary between 0 and 28. Mathematically, the scaling factor is a negative power of ten indicating where the decimal point falls in the value.
For example, if a decimal is defined as 12.34, the number is stored as the integer 0x4D2 (1234 in decimal) with a scaling factor of 2.
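You can inspect this representation yourself with decimal.GetBits, a real method on System.Decimal (a minimal sketch):

using System;

int[] bits = decimal.GetBits(12.34m);
// bits[0..2] hold the 96-bit integer; bits[3] packs the sign bit and the scale.
Console.WriteLine("0x{0:X}", bits[0]);     // 0x4D2, i.e. 1234
Console.WriteLine((bits[3] >> 16) & 0xFF); // 2, the scaling factor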
As long as a number has 28 (or fewer) significant digits and 28 (or fewer) decimal places, the decimal type can store it exactly. The same does not hold for floating-point numbers. If you define a float equal to 12.34, it is actually stored as 0xC570A4 (12,939,428) divided by 0x100000 (1,048,576). That value equals 12.340000152587890625, which is only approximately 12.34. Even if you define a double as 12.34, it too is only a number approximately equal to 12.34.
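This claim is easy to verify (a sketch I added): casting the float to double exposes the exact stored value without further rounding:

using System;

float f = 12.34f;
Console.WriteLine(((double)f).ToString("G17"));         // 12.340000152587891
Console.WriteLine((double)f == 12939428.0 / 1048576.0); // True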
This is why you should use decimal for calculations where you don't want cents to mysteriously appear or disappear: floating-point numbers are imprecise. The floating-point types are suited to scientific and engineering applications, but not to financial ones.
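A classic demonstration of the difference (my own sketch): summing one hundred pennies with double drifts away from 1.0, while decimal stays exact:

using System;

double d = 0.0;
decimal m = 0.0m;
for (int i = 0; i < 100; i++)
{
    d += 0.01;  // 0.01 has no exact binary representation
    m += 0.01m; // 0.01m is stored exactly (integer 1, scale 2)
}
Console.WriteLine(d == 1.0);        // False: rounding error has accumulated
Console.WriteLine(m == 1.0m);       // True
Console.WriteLine(d.ToString("R")); // 1.0000000000000007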