Numeric types: the floating-point types:
Daily life involves not only integers but also decimals, and C# provides two data types to represent them: float (single precision) and double (double precision).
They differ in value range and precision. Computers perform floating-point arithmetic much more slowly than integer arithmetic, and double operations are slower still than float operations. An application that uses a large number of double values also occupies more memory and gives the processor more work, but a double result is more accurate than a float result. So when high accuracy is not required, float is a reasonable choice.
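The memory cost mentioned above is easy to verify: float occupies 4 bytes and double occupies 8. A minimal sketch using the sizeof operator (which is allowed in safe code for the built-in numeric types):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof on the built-in numeric types needs no unsafe context.
        Console.WriteLine(sizeof(float));   // 4 bytes
        Console.WriteLine(sizeof(double));  // 8 bytes
    }
}
```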
Single-precision (float) type: values range from about positive/negative 1.5*10^-45 to 3.4*10^38, with a precision of 7 to 8 significant digits;
Double-precision (double) type: values range from about positive/negative 5.0*10^-324 to 1.7*10^308, with a precision of 15 to 16 significant digits;
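These limits do not have to be memorized: .NET exposes them as constants on the types themselves, MaxValue for the largest finite value and Epsilon for the smallest positive value. A small sketch (note that Epsilon here means the smallest positive denormalized value, not a rounding-error bound):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(float.MaxValue);   // largest finite float, about 3.4*10^38
        Console.WriteLine(float.Epsilon);    // smallest positive float, about 1.5*10^-45
        Console.WriteLine(double.MaxValue);  // largest finite double, about 1.7*10^308
        Console.WriteLine(double.Epsilon);   // smallest positive double, about 5.0*10^-324
    }
}
```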
I wrote a program to show the difference:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Example
{
    class Program
    {
        static void Main(string[] args)
        {
            // Variables of the same type can be defined on one line, separated by commas.
            // Note the f suffix when defining a float: a decimal literal defaults to double.
            float a = 3.0f, b = 10.0f;
            float c = b / a;    // division
            double d = 3.0, e = 10.0;
            double f = e / d;
            // "\n" in the format string produces a line break.
            Console.WriteLine("float c={0}\ndouble f={1}", c, f);
        }
    }
}
Results comparison:
In the spirit of truth-seeking I counted the output. The float result is 8 characters long (counting the decimal point), of which 7 are digits; the double result is 16 characters long, of which 15 are the digit 3.
Being reflective, I also questioned the stated properties of float and double. Isn't the range of float between positive/negative 1.5*10^-45 and 3.4*10^38? That should allow at least 38 digits, yet I only got 7. And double should manage at least 300 digits, yet I got 16. Shouldn't my 10/3 have printed 38 threes, or more than 300? Why not? Was the result rounded at the 8th and 17th position? I wrote the following code to check:
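The confusion here comes from mixing up range and precision: the range says how large (or small) a magnitude the type can hold, while the precision says how many significant digits survive. A hypothetical sketch: a 9-digit integer fits comfortably inside float's range, yet only about 7 of its digits are kept:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        float big = 123456789f;       // 9 significant digits in the source code
        Console.WriteLine((int)big);  // prints 123456792: only ~7 digits are exact,
                                      // the value was rounded to the nearest float
    }
}
```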
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Example
{
    class Program
    {
        static void Main(string[] args)
        {
            // The 8th character (the decimal point also counts as one) is 4, and the 9th is 4.
            float a = 3.333334444333333f;
            // The 16th character is 4, and the 17th is 5.
            double d = 3333333333333.3455555544;
            // "\n" in the format string produces a line break.
            Console.WriteLine("float a={0}\ndouble d={1}", a, d);
        }
    }
}
The result is this:
Sure enough: in the float result, the 4 in the 8th position was rounded down and the trailing digits dropped; in the double result, the 4 in the 16th position became 5, because the 17th digit was 5, and the trailing digits were dropped.
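The rounding observed here happens when the value is converted to text: on older .NET runtimes the default ToString shows at most 7 significant digits for float and 15 for double (newer .NET Core / .NET 5+ runtimes print the shortest string that round-trips instead). The standard "G9" and "G17" format specifiers request enough digits to round-trip any float or double, a sketch:

```csharp
using System;

class FormatDemo
{
    static void Main()
    {
        float c = 10.0f / 3.0f;
        double f = 10.0 / 3.0;
        Console.WriteLine(c.ToString("G9"));   // up to 9 significant digits: enough to round-trip any float
        Console.WriteLine(f.ToString("G17"));  // up to 17 significant digits: enough to round-trip any double
    }
}
```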
That concludes this C# learning diary entry on the floating-point data types.