This section studies problems related to the representation of numerical values: byte order and address growth direction, shifts, overflow, sign extension, and truncation.
Byte order and address growth direction
Stack variables
The code snippet is as follows:
int x = 0x10203040;
char* p = (char*) &x;
printf("%p %p %p\n", &x, p, &p);
printf("%x %p %p\n", p[0], &p[0], p);
printf("%x %p %p\n", p[1], &p[1], p+1);
printf("%x %p %p\n", p[2], &p[2], p+2);
printf("%x %p %p\n", p[3], &p[3], p+3);
The output is as follows:
0xbfee6bac 0xbfee6bac 0xbfee6ba8
40 0xbfee6bac 0xbfee6bac
30 0xbfee6bad 0xbfee6bad
20 0xbfee6bae 0xbfee6bae
10 0xbfee6baf 0xbfee6baf
Key points:
(1) For stack variables in Linux, the address growth direction is downward: p is defined after x, yet &p (0xbfee6ba8) is lower than &x (0xbfee6bac);
(2) The byte order here is little-endian, because the low-order byte 0x40 of int x sits at the lowest address 0xbfee6bac (a small run-time check is sketched below);
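The same idea can be turned into a run-time byte-order check. The following is a minimal sketch, not code from the original text; it assumes unsigned int is at least 4 bytes wide and simply inspects the first byte of a known value.

#include <cstdio>

int main(void)
{
    unsigned int x = 0x10203040;
    unsigned char* p = (unsigned char*) &x;  // view the same object one byte at a time
    if (p[0] == 0x40)
        printf("little-endian: the low byte 0x40 sits at the lowest address\n");
    else
        printf("big-endian: the high byte sits at the lowest address\n");
    return 0;
}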
Variables in the data segment
The code snippet is as follows:
int x = 0x10203040;
char* p = (char*) &x;

int main(void)
{
    printf("%p %p %p\n", &x, p, &p);
    printf("%x %p %p\n", p[0], &p[0], p);
    printf("%x %p %p\n", p[1], &p[1], p+1);
    printf("%x %p %p\n", p[2], &p[2], p+2);
    printf("%x %p %p\n", p[3], &p[3], p+3);
    return 0;
}
The output is as follows:
0x8049810 0x8049810 0x8049814
40 0x8049810 0x8049810
30 0x8049811 0x8049811
20 0x8049812 0x8049812
10 0x8049813 0x8049813
Key points:
(1) For global variables in the data segment in Linux, the address growth direction is upward: p is defined after x, and &p (0x8049814) is higher than &x (0x8049810);
(2) The byte order is again little-endian, because the low-order byte 0x40 of int x sits at the lowest address 0x8049810;
(3) In fact, the address growth direction of the heap is also upward, as the sketch below illustrates;
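As a rough illustration (not from the original text): on a typical Linux/glibc system, successive small allocations usually come back at increasing addresses, although the language standard itself makes no such guarantee.

#include <cstdio>
#include <cstdlib>

int main(void)
{
    // Three small allocations; on a typical glibc heap they are usually
    // handed out at increasing addresses (not guaranteed by the standard).
    int* a = (int*) malloc(sizeof(int));
    int* b = (int*) malloc(sizeof(int));
    int* c = (int*) malloc(sizeof(int));
    printf("%p %p %p\n", (void*) a, (void*) b, (void*) c);
    free(a);
    free(b);
    free(c);
    return 0;
}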
Shift
(1) Shifting a signed number is called an arithmetic shift; shifting an unsigned number is called a logical shift;
(2) For both signed and unsigned numbers, logical left shift and arithmetic left shift follow the same rule: every bit moves one position toward the high end (the sign bit, i.e. the highest bit, is replaced by the next-highest bit), and the vacated low bits are filled with 0;
(3) For a signed number, an arithmetic right shift fills the vacated high bits with copies of the sign bit (0 for a positive number, 1 for a negative one); for an unsigned number, a logical right shift fills the high bits with 0;
The code snippet is as follows:
int x = 0x7FFFFFFF;
cout << hex << x << endl;
cout << dec << x << endl;
x <<= 1;
cout << hex << x << endl;
cout << dec << x << endl;
The output is as follows:
7fffffff
2147483647
fffffffe
-2
Key points:
(1) Left shift of a signed positive number: the initial value of x is the largest positive number 0x7fffffff, i.e. decimal 2147483647; after shifting left by 1 bit, x becomes 0xfffffffe (a two's-complement negative), i.e. decimal -2;
The code snippet is as follows:
int x = 0x80000001;
cout << hex << x << endl;
cout << dec << x << endl;
x <<= 1;
cout << hex << x << endl;
cout << dec << x << endl;
The output is as follows:
80000001
-2147483647
2
2
Key points:
(1) Left shift of a signed negative number: the initial value of x is the negative number 0x80000001 (two's complement), i.e. decimal -2147483647; after shifting left by 1 bit, the sign bit is shifted out and x becomes 0x2, i.e. decimal 2;
The code snippet is as follows:
int x = 0x80000000;
cout << hex << x << endl;
cout << dec << x << endl;
x >>= 1;
cout << hex << x << endl;
cout << dec << x << endl;
x >>= 1;
cout << hex << x << endl;
cout << dec << x << endl;
The output is as follows:
80000000
-2147483648
c0000000
-1073741824
e0000000
-536870912
Key points:
(1) Right shift of a signed negative number: the initial value of x is the negative number 0x80000000, i.e. decimal -2147483648; after shifting right by 1 bit, x becomes 0xc0000000, i.e. decimal -1073741824; after shifting right by another bit, x becomes 0xe0000000, i.e. decimal -536870912 (the vacated high bits are filled with the sign bit 1);
The code snippet is as follows:
unsigned int x = 0x80000000;
cout << hex << x << endl;
cout << dec << x << endl;
x >>= 1;
cout << hex << x << endl;
cout << dec << x << endl;
x >>= 1;
cout << hex << x << endl;
cout << dec << x << endl;
The output is as follows:
80000000
2147483648
40000000
1073741824
20000000
536870912
Key points:
(1) Right shift of an unsigned number: the initial value of x is the unsigned number 0x80000000, i.e. decimal 2147483648; after shifting right by 1 bit, x becomes 0x40000000, i.e. decimal 1073741824; after shifting right by another bit, x becomes 0x20000000, i.e. decimal 536870912 (the vacated high bits are filled with 0);
Overflow
Positive overflow
The code snippet is as follows:
int a = 0x7fffffff;
int b = 1;
unsigned int c = a + b;  // c and d hold the same bit pattern in memory; only the interpretation differs
int d = a + b;
cout << hex << a << " " << b << " " << c << " " << d << endl;
cout << dec << a << " " << b << " " << c << " " << d << endl;
The output is as follows:
7fffffff 1 80000000 80000000
2147483647 1 2147483648 -2147483648
Key points:
(1) Positive overflow: a and b are both signed positive numbers; two's-complement addition gives a + b = 0x80000000. c and d hold the same bits in memory and differ only in interpretation: c is unsigned, so it reads as 2147483648, while d is signed, so it reads as -2147483648;
Negative overflow
The code snippet is as follows:
int a = 0x80000000;
int b = -1;
unsigned int c = a + b;  // c and d hold the same bit pattern in memory; only the interpretation differs
int d = a + b;
cout << hex << a << " " << b << " " << c << " " << d << endl;
cout << dec << a << " " << b << " " << c << " " << d << endl;
The output is as follows:
80000000 ffffffff 7fffffff 7fffffff
-2147483648 -1 2147483647 2147483647
Key points:
(1) Negative overflow: a and b are both negative; two's-complement addition gives 0x80000000 + 0xffffffff = 0x7fffffff. c and d interpret 0x7fffffff the same way, both as decimal 2147483647; a way to detect both kinds of overflow before they happen is sketched below;
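Since signed overflow is undefined behavior in standard C++, a common trick is to test against the representable range before adding. The following is a minimal sketch of that idea, not code from the original text; add_would_overflow is a name introduced here for illustration.

#include <iostream>
#include <limits>
using namespace std;

// Returns true if a + b would overflow an int (checked without actually overflowing).
bool add_would_overflow(int a, int b)
{
    if (b > 0 && a > numeric_limits<int>::max() - b) return true;  // would overflow upward
    if (b < 0 && a < numeric_limits<int>::min() - b) return true;  // would overflow downward
    return false;
}

int main()
{
    cout << boolalpha;
    cout << add_would_overflow(numeric_limits<int>::max(), 1)  << endl;  // true: the positive-overflow case above
    cout << add_would_overflow(numeric_limits<int>::min(), -1) << endl;  // true: the negative-overflow case above
    return 0;
}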
Sign bit extension
The code snippet is as follows:
short x = 0xff90;      // decimal -112
cout << "x:" << hex << x << " " << dec << x << endl;
unsigned short a = x;  // different interpretation of the same bits
cout << "a:" << hex << a << " " << dec << a << endl;
int b = x;             // sign bit extension: the high bits are filled with 1
cout << "b:" << hex << b << " " << dec << b << endl;
unsigned int c = x;    // x is first converted to int, then int to unsigned int
cout << "c:" << hex << c << " " << dec << c << endl;
int d = a;             // a is unsigned, so the high bits are filled with 0
cout << "d:" << hex << d << " " << dec << d << endl;
unsigned int e = a;    // a is unsigned, so the high bits are filled with 0
cout << "e:" << hex << e << " " << dec << e << endl;
The output is as follows:
x:ff90 -112
a:ff90 65424
b:ffffff90 -112
c:ffffff90 4294967184
d:ff90 65424
e:ff90 65424
Key points:
(1) Converting between signed and unsigned numbers of the same width leaves the bit-level pattern unchanged; only the interpretation changes (a = x);
(2) For unsigned int c = x, x is first converted to int (with sign extension) and then from int to unsigned int; it is not first converted to unsigned short and then zero-extended;
(3) When a is widened to d and e: since a is unsigned, the vacated high bits are simply filled with 0; sign extension (fill with 0 for a positive value, 1 for a negative one) happens only when a signed number is widened; a manual version of this sign extension is sketched below;
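To make the rule concrete, here is a minimal sketch (not from the original text; it assumes a 16-bit short, a 32-bit int, and two's-complement representation) that performs the sign extension of b = x by hand: replicate bit 15 into the upper bits.

#include <iostream>
using namespace std;

// Manually sign-extend a 16-bit pattern to 32 bits (assumes 32-bit unsigned int).
int sign_extend_16(unsigned int low16)
{
    low16 &= 0xffffu;          // keep only the low 16 bits
    if (low16 & 0x8000u)       // the 16-bit sign bit is set
        low16 |= 0xffff0000u;  // fill the upper 16 bits with 1s
    return (int) low16;        // reinterpret the pattern as a signed int (two's complement assumed)
}

int main()
{
    cout << hex << sign_extend_16(0xff90) << " " << dec << sign_extend_16(0xff90) << endl;  // ffffff90 -112
    cout << hex << sign_extend_16(0x1234) << " " << dec << sign_extend_16(0x1234) << endl;  // 1234 4660
    return 0;
}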
Truncate numbers
The code snippet is as follows:
int x = 0xfffda218;    // decimal -155112
cout << hex << x << " " << dec << x << endl;
short a = x;           // the sign bit is irrelevant: the remaining truncated bits are simply reinterpreted
cout << hex << a << " " << dec << a << endl;
unsigned short b = x;  // the sign bit is irrelevant: the remaining truncated bits are simply reinterpreted
cout << hex << b << " " << dec << b << endl;
int c = a;             // precision may already be lost; sign extension now follows the sign bit of a
cout << hex << c << " " << dec << c << endl;
The output is as follows:
fffda218 -155112
a218 -24040
a218 41496
ffffa218 -24040
Key points:
(1) Truncating a number simply reinterprets the remaining low-order bit pattern;
(2) For c = a, the value is sign-extended according to the sign bit of a; a simple check for whether a truncation loses information is sketched below;
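A related practical check (not from the original text; fits_in_short is a name introduced here) is to truncate, widen back, and compare: the round trip reproduces the original value only if nothing was lost.

#include <iostream>
using namespace std;

// Does an int survive a round trip through short without losing information?
bool fits_in_short(int x)
{
    short t = (short) x;  // keep only the low 16 bits (implementation-defined if x does not fit)
    return (int) t == x;  // widening back reproduces x only if nothing was lost
}

int main()
{
    cout << boolalpha;
    cout << fits_in_short(-155112) << endl;  // false: the value of x in the example above
    cout << fits_in_short(-112)    << endl;  // true: -112 fits in a short
    return 0;
}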
"Information representation" value