While casually reading through some code, I stumbled on a snippet that was hard to understand.
byte[] bs = digest.digest(origin.getBytes(Charset.forName(charsetName)));
StringBuilder sb = new StringBuilder();
for (int i = 0; i < bs.length; i++) {
    int c = bs[i] & 0xFF;
    if (c < 16) {
        sb.append("0");
    }
    sb.append(Integer.toHexString(c));
}
return sb.toString();
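For context, here is the same fragment as a complete, runnable method. This is only a sketch of my own; the md5Hex wrapper, the MessageDigest setup, and the charsetName parameter are assumptions, since the original post only shows the loop.

import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {
    // Hypothetical wrapper around the fragment above.
    public static String md5Hex(String origin, String charsetName)
            throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        byte[] bs = digest.digest(origin.getBytes(Charset.forName(charsetName)));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < bs.length; i++) {
            int c = bs[i] & 0xFF;              // mask to an unsigned 0..255 value
            if (c < 16) {
                sb.append("0");                // pad a single hex digit to two
            }
            sb.append(Integer.toHexString(c));
        }
        return sb.toString();
    }
}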
bs is the byte array produced by MD5-hashing the string. At first it was hard for me to understand why the loop assigns bs[i] & 0xFF to an int.
bs[i] is 8 bits, and 0xFF in binary is 11111111, so isn't bs[i] & 0xFF just bs[i] itself? What's the point?
So I wrote a demo to check.
package jvmproject;

public class Test {
    public static void main(String[] args) {
        byte[] a = new byte[10];
        a[0] = -127;
        System.out.println(a[0]);
        int c = a[0] & 0xFF;
        System.out.println(c);
    }
}
I print a[0] first and then the value of a[0] & 0xFF; I expected both results to be -127.
But the results were quite unexpected!
-127
129
What was going on? Was the & 0xFF wrong after all?
I really couldn't make sense of it, so I started thinking in the direction of two's complement.
I remembered from studying computer organization that computers store integers using two's complement.
Let's review three concepts: sign-magnitude, ones' complement, and two's complement.
For a positive number (e.g., 00000001), the highest bit is the sign bit, and the ones' complement and two's complement are the same as the sign-magnitude form.
For a negative number, take -1 as an example: its sign-magnitude form is 10000001. The ones' complement inverts every bit except the sign bit, giving 11111110, and the two's complement adds 1 to the ones' complement, giving 11111111.
The concepts are that simple.
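To see what Java actually stores, here is a small sketch of my own (not from the original post) that prints the two's-complement bit patterns:

package jvmproject;

public class RepresentationDemo {
    public static void main(String[] args) {
        byte b = -1;
        // A byte passed to Integer.toBinaryString is first promoted to int,
        // so this prints the full 32-bit two's complement: thirty-two 1s.
        System.out.println(Integer.toBinaryString(b));
        // Masking with 0xFF keeps only the low 8 bits: 11111111.
        System.out.println(Integer.toBinaryString(b & 0xFF));
    }
}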
When -127 is assigned to a[0], a[0] is a byte, so the computer stores it as the two's complement 10000001 (8 bits).
When a[0] is printed to the console, it is output as an int, and the JVM sign-extends it: since an int is 32 bits, the two's complement becomes 11111111 11111111 11111111 10000001 (32 bits). This 32-bit two's complement also represents -127.
Notice that although the stored two's complement was converted from 10000001 (8 bits) to 11111111 11111111 11111111 10000001 (32 bits) in the byte-to-int conversion, the decimal value that the two representations stand for is clearly still the same.
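A short sketch of my own illustrating that this widening preserves the decimal value:

package jvmproject;

public class SignExtensionDemo {
    public static void main(String[] args) {
        byte b = -127;          // stored as 10000001
        int i = b;              // implicit widening: the high 24 bits are
                                // filled with copies of the sign bit, 1
        System.out.println(i);  // -127: the decimal value is unchanged
        // Prints 11111111111111111111111110000001 (32 bits).
        System.out.println(Integer.toBinaryString(i));
    }
}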
But do we always convert byte to int just to keep the decimal value consistent?
Not necessarily. When we read a file into a byte array, for instance, do we care about the decimal value of each byte? What we actually care about is the binary bit pattern stored underneath.
So now you can probably guess why a byte is ANDed with 0xFF before being assigned to an int: the essential reason is to keep the underlying binary pattern consistent.
When a byte whose sign bit is 1 is widened to an int, sign extension fills the high 24 bits with 1, so the 32-bit pattern no longer matches the original 8 bits padded with zeros. ANDing with 0xFF forces the high 24 bits to 0 while leaving the low 8 bits unchanged, which preserves the original binary data.
Of course, once the binary data is preserved this way, interpreting the bits as a byte and as an int must give different decimal values, because the position of the sign bit has changed.
As in the demo, int c = a[0] & 0xFF computes 11111111 11111111 11111111 10000001 & 00000000 00000000 00000000 11111111 = 00000000 00000000 00000000 10000001, and this value is 129,
so the output value of c is 129. Some people ask why a[0] in the expression above is 32 bits rather than 8: whenever a byte is converted to an int, or takes part in an operation that requires an int, the system widens it to 32 bits by sign extension first, and only then does it participate in the operation.
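This also explains why the hex-conversion loop at the beginning needs the mask. A small sketch of my own showing what would happen without it:

package jvmproject;

public class HexDemo {
    public static void main(String[] args) {
        byte b = -127;
        // With the mask: the expected two hex digits for this byte.
        System.out.println(Integer.toHexString(b & 0xFF)); // 81
        // Without the mask: b is sign-extended to int first, so the
        // 24 high one-bits show up as six extra 'f' characters.
        System.out.println(Integer.toHexString(b));        // ffffff81
    }
}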