Represent "Han" with UTF-16"
In UTF-16, "Han" is expressed as 01101100 01001001 (16 bits in total, two bytes). When a program knows the data is UTF-16, it simply parses two bytes at a time as one unit. This is very simple.
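As a quick check, here is a minimal Python sketch (the variable names are illustrative) that prints the UTF-16 bytes of "Han":

```python
# Minimal sketch: inspect the UTF-16 bytes of "汉" (code point 27721, 0x6C49).
han = "\u6c49"  # the character "Han"

# Encode as UTF-16 big-endian so no byte-order mark is prepended.
utf16_bytes = han.encode("utf-16-be")

# Prints "01101100 01001001" -- the two bytes described above.
print(" ".join(f"{b:08b}" for b in utf16_bytes))
print(ord(han))  # 27721
```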
Represent "Han" with UTF-8"
UTF-8 is more complex. Here the program reads the data byte by byte and decides, from the marker bits at the start of each byte, whether one, two, or three bytes should be processed as a single unit (a sketch after the following list shows how a program can apply this rule):
0xxxxxxx. A byte starting with 0 (each x stands for an arbitrary bit) is processed as a single-byte unit; this is exactly the same as ASCII.
110xxxxx 10xxxxxx. In this format, two bytes are taken as one unit.
1110xxxx 10xxxxxx 10xxxxxx. In this format, three bytes are taken as one unit.
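To make the rule concrete, here is a small Python sketch (an illustration, not a full decoder) that judges the unit length from the marker bits of the lead byte and walks a UTF-8 byte string unit by unit:

```python
def utf8_unit_length(first_byte: int) -> int:
    """Return how many bytes form one UTF-8 unit, judged from the lead byte's marker bits."""
    if first_byte >> 7 == 0b0:        # 0xxxxxxx -> single-byte unit, same as ASCII
        return 1
    if first_byte >> 5 == 0b110:      # 110xxxxx -> two-byte unit
        return 2
    if first_byte >> 4 == 0b1110:     # 1110xxxx -> three-byte unit
        return 3
    if first_byte >> 3 == 0b11110:    # 11110xxx -> four-byte unit (beyond this article's examples)
        return 4
    raise ValueError("not a valid UTF-8 lead byte (continuation bytes start with 10)")

data = "A\u6c49".encode("utf-8")      # "A" plus "Han"
i = 0
while i < len(data):
    n = utf8_unit_length(data[i])
    print(data[i:i + n].decode("utf-8"), "uses", n, "byte(s)")
    i += n
```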
This is an agreed rule, and you must follow it when encoding text in UTF-8. We can also see that UTF-16 needs no marker bits, so two bytes can represent 2^16 = 65536 characters.
Because UTF-8 reserves bits for these markers, one byte can only represent 2^7 = 128 characters, two bytes can only represent 2^11 = 2048 characters, and three bytes can represent 2^16 = 65536 characters.
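The arithmetic above can be spelled out directly; the counts come from the number of free x bits left in each format:

```python
# Free payload bits per UTF-8 unit length, versus plain two-byte UTF-16.
print(2 ** 7)    # 1-byte unit (0xxxxxxx): 7 free bits                 -> 128 characters
print(2 ** 11)   # 2-byte unit (110xxxxx 10xxxxxx): 5 + 6 free bits    -> 2048 characters
print(2 ** 16)   # 3-byte unit (1110xxxx 10xxxxxx 10xxxxxx): 4 + 6 + 6 -> 65536 characters
```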
Because the encoding of "Han" is greater than 27721 and all two bytes are not enough, it can only be represented by three bytes.
So it must use the format 1110xxxx 10xxxxxx 10xxxxxx, filling the x positions with the binary value of 27721 (not necessarily from left to right; it could also be from right to left). This raises another question.
The filling order can differ, which is where the terms Big-Endian and Little-Endian come from: Big-Endian fills from left to right, and Little-Endian fills from right to left.
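Here is a minimal sketch that fills the binary value of 27721 into the three-byte template from left to right (the order the text calls Big-Endian) and cross-checks the result against Python's own UTF-8 encoder; the variable names are illustrative:

```python
code_point = 27721                 # "汉", hexadecimal 0x6C49
bits = f"{code_point:016b}"        # '0110110001001001'

# Split the 16 payload bits into 4 + 6 + 6 and drop them into
# the template 1110xxxx 10xxxxxx 10xxxxxx, filling left to right.
byte1 = "1110" + bits[:4]
byte2 = "10" + bits[4:10]
byte3 = "10" + bits[10:]
print(byte1, byte2, byte3)         # 11100110 10110001 10001001

# Cross-check against the built-in UTF-8 encoder.
print(" ".join(f"{b:08b}" for b in "\u6c49".encode("utf-8")))
```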
From the above we can see that UTF-8 has to judge the marker bits at the beginning of each byte, so if one byte is corrupted in transmission, the following bytes may also be parsed incorrectly. UTF-16 does not judge any starting marker, so even a corrupted byte only affects one character, which gives it stronger fault tolerance.
Differences between UTF-8 and UTF-16