A previous post discussed Unicode and other encoding issues, but without any actual code, so this time let's verify those points with .NET.
Before verifying them, here is a list of the .NET classes and methods related to encoding, binary, and bytes.
1. Conversion between a string and a byte (an integer from 0 to 255), in various number bases
Using the System.Convert class
String to byte:
Convert.ToByte(string, base)
base: 2 for binary, 8 for octal, 10 for decimal, 16 for hexadecimal (try passing 33 and you get an exception)
This parses a string into a byte (0 to 255):
Convert.ToByte("01000001", 2) returns 65
Convert.ToByte("255", 10) returns 255
Convert.ToByte("42", 16) returns 66
Likewise, byte to string also uses the Convert class:
Convert.ToString(byte, base)
which formats the byte as a string in the corresponding base.
With these two methods, it is easy to convert among binary, octal, decimal, and hexadecimal.
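The round trip described above can be sketched as follows (the specific input values are illustrative, not from the original post):

```csharp
using System;

class BaseConversionDemo
{
    static void Main()
    {
        // String to byte: parse in a given base.
        // Only 2, 8, 10, and 16 are accepted; any other base throws ArgumentException.
        byte b1 = Convert.ToByte("01000001", 2);  // binary      -> 65
        byte b2 = Convert.ToByte("255", 10);      // decimal     -> 255
        byte b3 = Convert.ToByte("42", 16);       // hexadecimal -> 66

        // Byte to string: format a byte in a given base.
        string bin = Convert.ToString(b1, 2);     // "1000001"
        string hex = Convert.ToString(b2, 16);    // "ff"

        Console.WriteLine($"{b1} {b2} {b3} {bin} {hex}");
    }
}
```

Note that Convert.ToString does not pad with leading zeros, so 65 formats as "1000001" rather than "01000001".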
2. Conversion between char, int, long, bool, etc. and byte[] (how these values are stored in memory)
Using the System.BitConverter class
We all know that basic types such as char, int, and long occupy a fixed number of bytes in memory, so you can inspect their in-memory representation directly with BitConverter.GetBytes().
You can then render the resulting byte[] as a string with BitConverter.ToString(byte[]) (for example, "F9-03" represents 2 bytes).
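A minimal sketch of the two calls above; the value 1017 is my own choice (0x03F9), picked because its two little-endian bytes print as the "F9-03" pattern mentioned above:

```csharp
using System;

class BitConverterDemo
{
    static void Main()
    {
        // GetBytes returns the raw in-memory bytes of a value.
        // On a little-endian machine (typical x86/x64) the low-order byte comes first.
        byte[] shortBytes = BitConverter.GetBytes((short)1017); // short -> 2 bytes
        byte[] intBytes   = BitConverter.GetBytes(1017);        // int   -> 4 bytes
        byte[] charBytes  = BitConverter.GetBytes('A');         // char  -> 2 bytes (UTF-16)

        // ToString renders a byte[] as hyphen-separated hex pairs.
        Console.WriteLine(BitConverter.ToString(shortBytes)); // F9-03 on little-endian
        Console.WriteLine(BitConverter.ToString(intBytes));   // F9-03-00-00 on little-endian
        Console.WriteLine(BitConverter.ToString(charBytes));  // 41-00 on little-endian
    }
}
```

BitConverter.IsLittleEndian reports the byte order of the current machine, so the exact output depends on the hardware.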
A string is made up of chars, so a simple foreach (char c in s) loop shows how the string is stored (experiment shows that a .NET string is held in memory as Unicode (UTF-16), as the example below demonstrates).