First, a few basics:
In a 32-bit environment, the representation range of int is -2^31 to 2^31 - 1.
Reason: int is a signed type, so the highest bit is the sign bit. The largest positive number is 01111111 11111111 11111111 11111111, which is 2^31 - 1.
Now look at the minimum, -2^31. Its two's-complement representation is 10000000 00000000 00000000 00000000: here the highest bit carries both the sign and a value weight, which is why one extra negative number fits compared to the positive side.
What about -2^31 - 1? Subtracting 1 from 10000000 00000000 00000000 00000000 borrows all the way through and gives 01111111 11111111 11111111 11111111, which is 2^31 - 1: the value wraps around, so -2^31 - 1 cannot be represented in 32 bits. Therefore the representation range of int is -2^31 to 2^31 - 1.
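To make this concrete, here is a minimal Java sketch (in Java, int is always a 32-bit two's-complement value, regardless of the hardware). It prints the two extremes, shows the wraparound when you go one below the minimum, and dumps the underlying bit patterns:

```java
public class IntRange {
    public static void main(String[] args) {
        // The extremes match the derivation above: -2^31 and 2^31 - 1.
        System.out.println(Integer.MIN_VALUE); // -2147483648
        System.out.println(Integer.MAX_VALUE); // 2147483647

        // Subtracting 1 from the minimum wraps around to the maximum:
        // 10000000...0 - 1 = 01111111...1 in two's complement.
        System.out.println(Integer.MIN_VALUE - 1); // 2147483647

        // The bit patterns themselves: 0 followed by 31 ones,
        // and 1 followed by 31 zeros (leading zeros are not printed).
        System.out.println(Integer.toBinaryString(Integer.MAX_VALUE));
        System.out.println(Integer.toBinaryString(Integer.MIN_VALUE));
    }
}
```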
Next, the shift operators:
int i = 1; i = i << 2;
This shifts i left by two bits. The left-shift rule comes down to one point: high-order bits (including the sign bit) are discarded, and the vacated low-order bits are filled with 0.
If the shift count exceeds the bit width of the type, the count is taken modulo the width. Shifting an int by 33 bits actually shifts by only 33 % 32 = 1 bit.
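A short Java sketch of both rules (in Java this masking is guaranteed by the language specification: the count is masked with & 31 for int and & 63 for long; in C, by contrast, over-wide shift counts are undefined behavior):

```java
public class ShiftCount {
    public static void main(String[] args) {
        int i = 1;
        // Plain left shift: bits move left, zeros fill the low end.
        System.out.println(i << 2);   // 4

        // Only the low 5 bits of the count are used for int (count & 31),
        // so shifting by 33 is the same as shifting by 33 % 32 = 1.
        System.out.println(1 << 33);  // 2, not 0
        System.out.println(1 << 32);  // 1, because 32 & 31 == 0

        // For long the mask is 63 (count & 63).
        System.out.println(1L << 64); // 1
    }
}
```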
There are two right shifts: arithmetic right shift (signed) >> and logical right shift (unsigned) >>>.
Arithmetic right shift: the vacated high bits are filled with copies of the sign bit, so the sign is preserved. For example (8-bit): 1000 1000 >> 3 gives 1111 0001.
Logical right shift: the sign bit shifts along like any other bit, and the vacated high bits are filled with 0. For example (8-bit): 1000 1000 >>> 3 gives 0001 0001.
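The same contrast in runnable Java (using an int here, since Java promotes smaller types to int before shifting, which would change the 8-bit example):

```java
public class RightShift {
    public static void main(String[] args) {
        int n = -8; // bits: 11111111 11111111 11111111 11111000

        // Arithmetic shift >>: vacated high bits copy the sign bit,
        // so the result stays negative.
        System.out.println(n >> 1);  // -4

        // Logical shift >>>: the sign bit shifts along and the vacated
        // high bits are filled with 0, so the result becomes a large
        // positive number.
        System.out.println(n >>> 1); // 2147483644
    }
}
```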
Finally, as a rule, keep the shift count at least 0 and less than the bit width of the type; even though Java masks out-of-range counts as described above, relying on that makes code hard to read.