C# Learning Diary 06 --- Floating-Point Data Types
The floating-point types among the value types:
In daily life we deal not only with integers but also with decimals. In C#, there are two types for representing decimal numbers: float (single precision) and double (double precision).
They differ in value range and precision. A computer computes floating-point numbers much more slowly than integers, and a program that uses many double-precision values occupies more memory and puts a heavier load on the processor. However, a double result is more accurate than a float result, so float can be used when high precision is not required.
Float type: the value range is roughly plus or minus 1.5*10^-45 to 3.4*10^38, with 7 to 8 significant digits of precision;
Double type: the value range is roughly plus or minus 5.0*10^-324 to 1.7*10^308, with 15 to 16 significant digits of precision;
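The limits quoted above can be checked against the constants the framework itself exposes. A minimal sketch (assuming a standard C# console project; the comments restate the document's figures, which are approximate):

```csharp
using System;

class RangeCheck
{
    static void Main()
    {
        // Each numeric type exposes its own limits as constants.
        Console.WriteLine("float:  {0} .. {1}", float.MinValue, float.MaxValue);   // about +/-3.4*10^38
        Console.WriteLine("float smallest positive: {0}", float.Epsilon);          // about 1.5*10^-45
        Console.WriteLine("double: {0} .. {1}", double.MinValue, double.MaxValue); // about +/-1.7*10^308
        Console.WriteLine("double smallest positive: {0}", double.Epsilon);        // about 5.0*10^-324
    }
}
```

Note that `float.Epsilon` and `double.Epsilon` are the smallest positive (denormalized) values, which is where the 1.5*10^-45 and 5.0*10^-324 figures come from.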
I will write a program to show the difference:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Example
{
    class Program
    {
        static void Main(string[] args)
        {
            // Variables of the same type can be declared on one line, separated by commas.
            float a = 3.0f, b = 10.0f; // note: an f/F suffix is required here, because a bare decimal literal defaults to double
            float c = b / a;
            double d = 3.0, e = 10.0;
            double f = e / d;
            Console.WriteLine("float c = {0}  double f = {1}", c, f); // print both results
        }
    }
}
Result comparison:
In a fact-checking spirit, I carefully counted the digits of the results: the float result had 8 significant digits, 7 of which were 3s; the double result had 16 significant digits, 15 of which were 3s.
In a reflective mood, I reconsidered the properties of float and double. Isn't the float range between plus or minus 1.5*10^-45 and 3.4*10^38? That can represent numbers with up to 38 digits, yet here I only got 7 threes; double can represent numbers with more than 300 digits, yet I only got 16. Shouldn't 10/3 have given me 38 threes, or 300-odd threes? Why not? (In fact the range describes how large or small a value can be, while the precision describes how many significant digits are actually stored.) Is it that everything past the 8th digit for float, or the 17th for double, is rounded off? I wrote the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Example
{
    class Program
    {
        static void Main(string[] args)
        {
            float a = 3.3334444333333f;    // the 8th significant digit is 4, the 9th is 3
            double d = 3.3334444333333445; // the 16th significant digit is 4, the 17th is 5
            Console.WriteLine("float a = {0}  double d = {1}", a, d); // print both results
        }
    }
}
The result is as follows:
Indeed: the float result is cut off after the 8th significant digit, and since the 9th digit is 3 it simply rounds down and the rest is dropped; the double result shows the 16th significant digit rounded up by 1, because the 17th digit is 5, before the remaining digits are dropped.
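One way to see this rounding at work is to print the same value with different precision specifiers. A sketch using the standard "G" numeric format (how many digits the *default* ToString shows varies between .NET versions, so the explicit specifiers are used here):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double f = 10.0 / 3.0;
        // "G15" keeps 15 significant digits; "G17" keeps enough digits to round-trip the exact bits.
        Console.WriteLine(f.ToString("G15")); // digits beyond the 15th are rounded away
        Console.WriteLine(f.ToString("G17")); // shows the extra stored digits
        // Explicit round-half-up, the same behavior observed with the 17th digit being 5:
        Console.WriteLine(Math.Round(3.45, 1, MidpointRounding.AwayFromZero));
    }
}
```

So the number of digits printed is a property of the formatting and the stored precision, not of the type's range.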