In printf, the %f conversion reads 8 bytes, i.e. a double (in scanf, %f reads into a 4-byte float).
If the argument really is a floating-point type, 8 bytes are pushed for it (a float is promoted to double), so that value and every later argument print correctly.
But if the argument is a non-floating type smaller than 8 bytes, such as int or short int, it is only sign-extended to 4 bytes on the stack, while %f still consumes 8 bytes, so it also swallows bytes that belong to the following arguments.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, const char *argv[])
{
    int a = 3;
    int b = 5;
    printf("%f%d\n", a, b);
    return 0;
}
Under Linux with GCC the output is: 0.000000 134513801. The second number is a garbage value, the same kind you get from printing an uninitialized variable: int x; printf("%d\n", x); also prints 134513801 here.
Under VS2013 the output is: 0.000000 0. (In VS, int x; printf("%d\n", x); is reported as an error because x is used uninitialized.)
The argument bytes on printf's stack are (hex):
stack top                    stack bottom
03 00 00 00   05 00 00 00
%f reads 8 bytes, so it also consumes the 4 bytes that belong to b.
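The same bit pattern can be reproduced without relying on the calling convention. Here is a minimal sketch (my addition; it assumes a little-endian machine, as the text does) that lays the two ints side by side and reinterprets the 8 bytes as a double via memcpy:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* the two int arguments as they sit next to each other in memory
       on a little-endian machine: 03 00 00 00 05 00 00 00 */
    uint32_t words[2] = { 3u, 5u };

    double d;
    memcpy(&d, words, sizeof d);   /* reinterpret the 8 bytes as a double */

    printf("%f\n", d);             /* prints 0.000000                        */
    printf("%e\n", d);             /* shows the tiny denormal, not exactly 0 */
    return 0;
}

memcpy is used instead of a pointer cast so the reinterpretation stays well defined.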
Format           sign bit   exponent bits   mantissa bits   total bits
single (float)       1            8              23             32
double               1           11              52             64
As an illustration, decode the 4 bytes of a as a 32-bit float: in memory they are 03 00 00 00, and since the machine is little-endian the value is 0x00000003.
0 | 0000 0000 | 000 0000 0000 0000 0000 0011
The first bit (0) is the sign bit: the value is positive.
Bits 2 through 9 are the exponent field, here all zero (the exponent bias of a 32-bit float is 127). An all-zero exponent marks a denormal number: the effective exponent is -126 and there is no implicit 1 before the binary point.
The remaining 23 bits are the mantissa; only the lowest two bits are set.
Value: (2^(-22) + 2^(-23)) * 2^(-126) ≈ 4.2e-45,
so close to zero that %f prints 0.000000.
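The field decomposition can be checked directly. A small sketch (assuming IEEE 754 floats and a little-endian layout, as above) extracts the sign, exponent, and mantissa bits of the pattern 0x00000003:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t bits = 0x00000003u;              /* the pattern decoded above */
    float f;
    memcpy(&f, &bits, sizeof f);

    unsigned sign     = bits >> 31;           /* 1 sign bit      */
    unsigned exponent = (bits >> 23) & 0xFF;  /* 8 exponent bits */
    unsigned mantissa = bits & 0x7FFFFF;      /* 23 mantissa bits */

    printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
    printf("%f\n", f);   /* 0.000000                                    */
    printf("%e\n", f);   /* roughly 4.2e-45: a denormal, close to zero  */
    return 0;
}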
For a 64-bit double, the exponent bias is 2^10 - 1 = 1023.
Example 1:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, const char *argv[])
{
    int a = 3;
    int b = 0xFFFFFFFF;
    printf("%f%d\n", a, b);
    return 0;
}
In GCC the result is: -nan 134513801. Here -nan is a NaN (not a number) with its sign bit set, not negative infinity.
In VS2013 the result is: -1.#QNAN0 0
The bytes on printf's stack are: 03 00 00 00 FF FF FF FF
%f reads them as the 8-byte value 0xFFFFFFFF00000003
Decoding that pattern as a double: the sign bit is 1, and the exponent field is all ones with a nonzero mantissa, which is exactly the encoding of a NaN.
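A quick way to confirm this, using the same memcpy trick (a sketch, assuming the byte layout above): reinterpret 0xFFFFFFFF00000003 as a double and test it with isnan and signbit:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void)
{
    uint64_t bits = 0xFFFFFFFF00000003ULL;  /* the 8 bytes that %f picked up */
    double d;
    memcpy(&d, &bits, sizeof d);

    printf("%f\n", d);                                 /* prints -nan          */
    printf("isnan=%d signbit=%d\n", isnan(d) != 0, signbit(d) != 0);  /* 1 1   */
    return 0;
}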
Example 2:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, const char *argv[])
{
    int a = 0x10000003;
    int b = 0x40110000;
    printf("%f%d\n", a, b);

    int c[2];
    c[0] = 0x10000003;
    c[1] = 0x40110000;
    double d = *(double *)c;
    printf("%f\n", d);
    return 0;
}
Analysis of the result under GCC:
In the first printf call the argument bytes on the stack are: 03 00 00 10 00 00 11 40
%f reads these 8 bytes as the 64-bit value 0x4011000010000003
As a 64-bit floating-point pattern: 0100 0000 0001 0001 0000 0000 0000 0000 0001 0000 0000 0000 0000 0000 0000 0011
Sign bit: 0
Exponent field (11 bits): 100 0000 0001 = 1025; subtracting the bias 1023 gives an exponent of 2
Mantissa field (52 bits): 0001 0000 0000 0000 0000 0001 0000 0000 0000 0000 0000 0000 0011
Value: (1 + 2^(-4) + 2^(-24) + 2^(-51) + 2^(-52)) * 2^2
≈ 2^2 + 2^(-2) = 4.250000
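To check the decomposition, a small sketch (assuming IEEE 754 doubles and the memcpy reinterpretation used above) rebuilds the value from the decoded fields with ldexp and compares it to the bit pattern:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void)
{
    uint64_t bits = 0x4011000010000003ULL;
    double d;
    memcpy(&d, &bits, sizeof d);

    /* rebuild the value from the decoded fields:
       (1 + 2^-4 + 2^-24 + 2^-51 + 2^-52) * 2^2 */
    double rebuilt = ldexp(1.0 + ldexp(1.0, -4) + ldexp(1.0, -24)
                               + ldexp(1.0, -51) + ldexp(1.0, -52), 2);

    printf("%f\n", d);            /* 4.250000                      */
    printf("%d\n", d == rebuilt); /* 1: the decomposition is exact */
    return 0;
}

(On Linux, link with -lm for ldexp.)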
Example 3: the reverse case (a float argument printed with %d)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, const char *argv[])
{
    int a = 5;
    float b = 3.0;
    printf("%d%d\n", a, b);
    return 0;
}
Output results: 5 0
Although b is a 4-byte float, it is promoted to an 8-byte double when it is passed to printf (default argument promotion).
float representation of 3.0: 0x40400000, in memory (little-endian): 00 00 40 40
after promotion to double: 0x4008000000000000, in memory: 00 00 00 00 00 00 08 40
So printf's stack holds: 05 00 00 00 | 00 00 00 00 00 00 08 40
The two %d conversions read 4 bytes each: 05 00 00 00 --> 5, then 00 00 00 00 --> 0, hence the output 5 0.
Comparing the two encodings of 3.0:
32-bit float:   0 | 1000 0000     | 100 0000 0000 0000 0000 0000
64-bit double:  0 | 100 0000 0000 | 1000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
Sign bit: 0 in both.
Exponent field: 1000 0000 (128 = 127 + 1) in the float, 100 0000 0000 (1024 = 1023 + 1) in the double; the field is 3 bits wider but encodes the same exponent 1.
Mantissa field: the same leading bits 100...0, with 29 more zero bits appended in the double.
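The two bit patterns can be printed directly. A minimal sketch, assuming IEEE 754 representations, that dumps float 3.0 and double 3.0:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float  f = 3.0f;
    double d = 3.0;    /* what the float becomes after default argument promotion */

    uint32_t fb;
    uint64_t db;
    memcpy(&fb, &f, sizeof fb);
    memcpy(&db, &d, sizeof db);

    printf("float  3.0: 0x%08X\n",    (unsigned)fb);           /* 0x40400000         */
    printf("double 3.0: 0x%016llX\n", (unsigned long long)db); /* 0x4008000000000000 */
    return 0;
}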
Summary: %f in printf reads 8 bytes; a float argument is promoted to an 8-byte double when it is passed to printf. The return value of printf is the number of characters actually written (not counting the terminating '\0').
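For instance, a trivial sketch of the return value:

#include <stdio.h>

int main(void)
{
    int n = printf("3.0\n");   /* writes 4 characters: '3', '.', '0', '\n' */
    printf("%d\n", n);         /* prints 4; the terminating '\0' is not counted */
    return 0;
}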