Analysis of the extra 0xffffff when printing hexadecimal values in C
Today I saw a friend's question on the blog. Roughly: when printing hexadecimal numbers in a network program, an inexplicable ffffff appears. For example, if the actual value of a byte is 0xc9, the printed value is 0xffffffc9. Link to the original question: http://q.cnblogs.com/q/71073/
In fact, similar problems do not occur only in network programs. Let's look at the sample code:
#include <stdio.h>

int main()
{
    char c = 0xc9;
    printf("A:c = %2x\n", (unsigned char)c);
    printf("B:c = %2x\n", c & 0xff);
    printf("C:c = %2x\n", c);
    return 0;
}
The program output is as follows:
A:c = c9
B:c = c9
C:c = ffffffc9
You can see:
Case A: casting c to unsigned char prints the correct value.
Case B: ANDing c with 0xff also prints the correct value.
Case C: printing c without any treatment reproduces the problem and prints ffffffc9.
Cases A and B are the fixes that a quick Baidu search turns up for the phenomenon in case C. Now let's analyze and explain the three cases A, B, and C one by one.
First, we must know that printf()'s %x (or %X) conversion prints an int in hexadecimal. Therefore, a char variable such as c is converted to int before it is printed.
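For illustration, here is a minimal sketch (the variable name promoted is mine) that performs the same char-to-int conversion explicitly, assuming a signed char and a 32-bit int:

#include <stdio.h>

int main(void)
{
    char c = 0xc9;          /* on a platform where char is signed, this stores -55 */
    int promoted = c;       /* the same char -> int conversion that %x receives    */

    printf("%x\n", promoted);   /* prints ffffffc9 under the assumptions above */
    return 0;
}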
Second, we know that computers represent signed integers in two's complement. For background on sign-magnitude, ones' complement, and two's complement, please read up on your own.
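As a quick refresher, here is a small sketch (assuming an 8-bit signed char) showing that the two's complement bit pattern of -55 is exactly 0xc9:

#include <stdio.h>

int main(void)
{
    signed char c = -55;              /* two's complement bit pattern: 11001001 */
    printf("%d\n", (unsigned char)c); /* prints 201 (0xc9): same bits, reread as unsigned */
    printf("%x\n", (unsigned char)c); /* prints c9 */
    return 0;
}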
Case C:
Two's complement of c (as stored): 11001001 (0xc9).
Ones' complement of c: 11001000.
Sign-magnitude of c: 10110111 (i.e. -55).
Because char is signed here, the 1 in the highest bit makes the value a negative number.
Converting c to int: char -----> int
Sign-magnitude of int_c: 10000000 00000000 00000000 00110111 (the sign bit 1 of c's sign-magnitude form moves up to the new highest bit; the other new high bits are filled with 0).
Ones' complement of int_c: 11111111 11111111 11111111 11001000
Two's complement of int_c: 11111111 11111111 11111111 11001001 (0xffffffc9).
So the seemingly strange printed value is actually perfectly reasonable. How do we avoid it? Look at cases A and B.
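In other words, the conversion sign-extends the value: the sign bit of the char is copied into every new high bit of the int, which is exactly where the leading ff bytes come from. A minimal sketch, again assuming a signed char and a 32-bit int:

#include <stdio.h>

int main(void)
{
    char c = 0xc9;     /* -55 when char is signed            */
    int  i = c;        /* sign extension: 0xc9 -> 0xffffffc9 */

    printf("%d\n", i);             /* -55      */
    printf("%x\n", (unsigned)i);   /* ffffffc9 */
    return 0;
}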
Case B:
Building on case C, we AND c with 0xff.
Two's complement of int_c: 11111111 11111111 11111111 11001001 (0xffffffc9)
&
0x000000ff:                00000000 00000000 00000000 11111111
The final result is        00000000 00000000 00000000 11001001 (0xc9).
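This masking is also the usual idiom when dumping a received buffer byte by byte; a sketch under that assumption (the buffer contents are made up for illustration):

#include <stdio.h>

int main(void)
{
    char buf[] = { 0x12, (char)0xc9, 0x7f, (char)0xff };  /* pretend data received from the network */
    int  i;

    for (i = 0; i < (int)sizeof buf; i++)
        printf("%02x ", buf[i] & 0xff);   /* the mask keeps only the low 8 bits */
    printf("\n");                         /* prints: 12 c9 7f ff */
    return 0;
}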
Case A:
I think the cast in case A is the most formal solution, although it is said that the Linux kernel source uses the (& 0xff) style.
Two's complement of c: 11001001 (0xc9).
Ones' complement of c: 11001001.
Sign-magnitude of c: 11001001.
(For a non-negative value, all three representations are identical.)
Here, c is explicitly cast to unsigned char, so the 1 in the highest bit is no longer a sign bit; the value is simply 201 (0xc9).
Converting c to int: unsigned char -----> int
Sign-magnitude of int_c: 00000000 00000000 00000000 11001001 (the value is non-negative, so the new high bits are simply filled with 0).
Ones' complement of int_c: 00000000 00000000 00000000 11001001
Two's complement of int_c: 00000000 00000000 00000000 11001001 (0xc9).
Therefore, the printed value is the expected c9.
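The cast also generalizes nicely: if a buffer is treated as unsigned char from the start, no masking is needed at all. A sketch of a tiny hexdump helper (the function name hex_dump is my own, not from the original post):

#include <stdio.h>
#include <stddef.h>

/* hypothetical helper: dump a block of memory byte by byte */
static void hex_dump(const void *p, size_t n)
{
    const unsigned char *byte = p;    /* unsigned char: the top bit is never a sign bit */
    size_t i;

    for (i = 0; i < n; i++)
        printf("%02x ", byte[i]);     /* each byte promotes to a non-negative int */
    printf("\n");
}

int main(void)
{
    char c = 0xc9;
    hex_dump(&c, sizeof c);           /* prints: c9 */
    return 0;
}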
If any of the above analysis is incorrect, please point it out.