The differences between a single-precision floating-point number (`float`) and a double-precision floating-point number (`double`) in C#:
(1) They occupy different numbers of bytes in memory
* A single-precision floating-point number occupies 4 bytes
* A double-precision floating-point number occupies 8 bytes
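The byte sizes above can be checked directly with the `sizeof` operator, which C# permits on built-in numeric types in safe code. A minimal sketch (the class and method names are illustrative):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof on built-in value types is allowed in safe code
        Console.WriteLine(sizeof(float));   // 4 bytes: single precision
        Console.WriteLine(sizeof(double));  // 8 bytes: double precision
    }
}
```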
(2) They have different numbers of significant digits
* A single-precision floating-point number keeps roughly 7 significant decimal digits
* A double-precision floating-point number keeps roughly 15–16 significant decimal digits
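The loss of precision is easy to observe by storing the same long constant in both types; this is a minimal sketch with an arbitrarily chosen constant:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // The same 18-digit constant stored in both types
        float  f = 1.23456789012345678f;
        double d = 1.23456789012345678;

        // float retains only ~7 significant digits; the rest are rounded away
        Console.WriteLine(f.ToString("G9"));
        // double retains ~15-16 significant digits
        Console.WriteLine(d.ToString("G17"));
    }
}
```

Widening the rounded `float` back to `double` does not recover the lost digits, which is why mixing the two types in comparisons is a common source of bugs.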
(3) They represent different ranges of values
* Single-precision range: approximately -3.40E+38 to +3.40E+38
* Double-precision range: approximately -1.79E+308 to +1.79E+308
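These limits are exposed as the `float.MaxValue` and `double.MaxValue` constants. A short sketch; note that exceeding the range produces infinity rather than an exception:

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(float.MaxValue);   // approximately 3.40E+38
        Console.WriteLine(double.MaxValue);  // approximately 1.79E+308

        // Overflow in floating-point arithmetic saturates to Infinity
        Console.WriteLine(float.MaxValue * 2f);
    }
}
```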
(4) They are processed at different speeds
Generally speaking, a CPU handles single-precision floating-point numbers faster than double-precision ones, although on modern desktop CPUs the difference mainly shows up in memory bandwidth and vectorized (SIMD) code rather than in individual scalar operations.
Difference between single and double precision in C#