In C, the integer types are signed by default (strictly, the signedness of plain char is implementation-defined, though it is signed on most common platforms). Here we use char as an example to illustrate the difference between (signed) char and unsigned char.
In memory, char is no different from unsigned char: each is one byte. The only difference is that the highest bit of char is treated as a sign bit, so (on two's complement machines) char can represent -128 to 127, while unsigned char has no sign bit and can represent 0 to 255. This is easy to understand: 8 bits give at most 256 combinations, so either way 256 distinct values can be represented.
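As a quick illustration (a minimal sketch, not from the original article; the exact limits come from <limits.h> and assume the usual 8-bit, two's complement char), the following prints the ranges of the two types:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* SCHAR_MIN, SCHAR_MAX and UCHAR_MAX are provided by <limits.h> */
    printf("signed char  : %d .. %d\n", SCHAR_MIN, SCHAR_MAX);   /* typically -128 .. 127 */
    printf("unsigned char: 0 .. %u\n", (unsigned)UCHAR_MAX);     /* typically 0 .. 255    */
    return 0;
}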
What is the difference in actual use? Mainly the sign bit. In ordinary assignment, reading and writing files, and handling network byte streams there is no difference: either way it is just a byte, and regardless of what the highest bit is, the bytes read back are the same. How you interpret that highest bit, however, determines what ends up displayed on the screen.
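For example (a small sketch of my own, not from the original article), writing the same bit pattern through a char and an unsigned char and reading it back yields identical bytes; only the interpretation differs:

#include <stdio.h>

int main(void)
{
    char          c  = (char)0xC3;   /* same bit pattern ... */
    unsigned char uc = 0xC3;         /* ... stored in both   */
    FILE *fp = fopen("byte.bin", "wb+");
    if (!fp) return 1;
    fwrite(&c,  1, 1, fp);
    fwrite(&uc, 1, 1, fp);
    rewind(fp);
    unsigned char back[2];
    fread(back, 1, 2, fp);
    fclose(fp);
    /* Both bytes read back as 0xC3: storage and I/O ignore the sign bit. */
    printf("%02x %02x\n", back[0], back[1]);
    return 0;
}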
The biggest difference between the two shows up here: when we want to represent a raw byte, we use unsigned char. Why? First, we generally think of a byte as having no sign bit. More importantly, if the byte's value is assigned to a wider type such as int or long, the system does some extra work. If the source is char, the system treats the highest bit as the sign bit, and since int may be 16 or 32 bits, the sign bit is extended (note that this extension also happens when assigning to unsigned int). If the source is unsigned char, no extension is performed. When the highest bit is 0 there is no difference between the two; when it is 1, there is. The same reasoning applies to other types, such as short and unsigned short.
The difference can be seen concretely in the following small example:
#include <stdio.h>

void f(unsigned char v)
{
    char c = v;                   /* signed interpretation of the byte   */
    unsigned char uc = v;         /* unsigned interpretation of the byte */
    unsigned int a = c, b = uc;   /* char is sign-extended here, unsigned char is not */
    int i = c, j = uc;
    printf("----------------\n");
    printf("%%c:%c,%c\n", c, uc);
    printf("%%x:%x,%x\n", c, uc);
    printf("%%u:%u,%u\n", a, b);
    printf("%%d:%d,%d\n", i, j);
}

int main(int argc, char *argv[])
{
    f(0x80);   /* highest bit is 1 */
    f(0x7F);   /* highest bit is 0 */
    return 0;
}
The resulting output is as follows:
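(The original output listing is missing here. On a typical platform with a 32-bit int, two's complement arithmetic, and a signed plain char, the program prints roughly the following; the %c line shows the raw bytes 0x80 and 0x7F, which are non-printable, so their on-screen appearance varies.)

----------------
%c:?,?
%x:ffffff80,80
%u:4294967168,128
%d:-128,128
----------------
%c:?,?
%x:7f,7f
%u:127,127
%d:127,127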
Results Analysis:
For (signed) char, 0x80 has the binary representation 1000 0000. When it is assigned as a char to unsigned int or int, the system treats the highest bit as the sign bit and sign-extends it. 0x7F has the binary representation 0111 1111; its highest bit is 0, so no extension takes place.
For unsigned char, no extension is performed regardless of whether the highest bit is 0 or 1.
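A common idiom (my own addition, not shown in the original article) for avoiding unwanted sign extension when widening a possibly signed char is to go through unsigned char, or to mask off the low byte first:

#include <stdio.h>

int main(void)
{
    char c = (char)0x80;

    int widened = c;                 /* sign-extended: 0xffffff80 with 32-bit int */
    int as_byte = (unsigned char)c;  /* no extension : 0x00000080                 */
    int masked  = c & 0xFF;          /* same effect  : 0x00000080                 */

    printf("%x %x %x\n", widened, as_byte, masked);
    return 0;
}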