At noon today, I suddenly wanted to figure out the relationship between Unicode and UTF-8, so I began to look up information online.
It turned out that the problem is more complicated than I thought; I kept at it from after lunch until well into the evening before I had a basic grasp of it.
Below are my notes, written mainly to sort out my own thinking, but I have tried to make them easy to understand and hope they will be useful to others. After all, character encoding is a cornerstone of computer technology; to be familiar with computers, you must know a little about it.
1. ASCII code
We know that inside a computer, all information is ultimately represented as a string of binary digits. Each binary bit has two states, 0 and 1, so eight binary bits can be combined into 256 states; a unit of eight bits is called a byte. That is to say, a single byte can represent 256 different states, each corresponding to one symbol, i.e. 256 symbols ranging from 00000000 to 11111111.
In the 1960s, the United States developed a set of character codes that defines the mapping between English characters and binary bits. This is the ASCII code, which is still in use today.
The ASCII code defines a total of 128 characters. For example, the space character is 32 (binary 00100000) and the uppercase letter A is 65 (binary 01000001). These 128 symbols (including 32 control characters that cannot be printed) occupy only the last seven bits of a byte; the leading bit is always set to 0.
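A quick way to verify these values, sketched here in Python:

    # Check the ASCII values of the space character and the letter A.
    for ch in (" ", "A"):
        code = ord(ch)
        print(repr(ch), code, format(code, "08b"))
    # ' ' 32 00100000
    # 'A' 65 01000001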
2. Non-ASCII Encoding
128 symbols are enough to encode English, but not enough to represent other languages. In French, for example, a letter with an accent mark above it cannot be represented in ASCII. So some European countries decided to use the idle highest bit of the byte to encode new symbols; for example, é in French is encoded as 130 (binary 10000010). In this way, the encoding systems used by these European countries can represent up to 256 symbols.
However, new problems arose. Different countries use different letters, so even though they all use 256 codes, those codes stand for different letters. For example, 130 represents é in the French encoding, the letter gimel (ג) in the Hebrew encoding, and yet another symbol in the Russian encoding. In all of these encodings, however, the codes 0-127 represent the same symbols; only the range 128-255 differs.
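To make the ambiguity concrete, here is a small Python sketch; the specific code pages used (cp437 for the Western European example and cp866 for the Russian one) are my own choice for illustration, since the note does not name particular code pages:

    # The single byte 0x82 (decimal 130) decoded under two different legacy
    # code pages yields two different characters.
    raw = bytes([130])
    print(raw.decode("cp437"))   # 'é' on the old IBM PC Western code page
    print(raw.decode("cp866"))   # 'В' (Cyrillic Ve) on the DOS Russian code page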
Asian countries use even more symbols; there are roughly 100,000 Chinese characters. A single byte can represent only 256 symbols, so multiple bytes must be used. For example, the common encoding for Simplified Chinese is gb2312, which uses two bytes per Chinese character and can therefore theoretically represent up to 256 × 256 = 65,536 characters.
The issue of Chinese encoding deserves its own article and is not covered in this note. It is pointed out here only that although multiple bytes are used to represent one symbol, the GB-family Chinese encodings have nothing to do with the Unicode and UTF-8 discussed below.
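As a small illustration in Python (using the character 严 that appears later in this note), one Chinese character does indeed occupy two bytes under gb2312:

    # One Chinese character occupies two bytes in gb2312.
    data = "严".encode("gb2312")
    print(data, len(data))   # b'\xd1\xcf' 2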
3. Unicode
As mentioned in the previous section, there are many encoding schemes in the world, and the same binary sequence can be interpreted as different symbols. Therefore, to open a text file you must know its encoding; if you read it with the wrong encoding, you get garbled characters. Why do emails so often appear garbled? Because the sender and the receiver are using different encodings.
You can imagine that if there were a single encoding that included every symbol in the world, with each symbol assigned a unique code, the garbling problem would disappear. This is Unicode: as its name suggests, it is an encoding of all symbols.
Unicode is of course a large collection; its current code space can hold more than a million symbols, each with a different code. For example, U+0639 is the Arabic letter ain, U+0041 is the English capital letter A, and U+4E25 is the Chinese character 严 (yán). You can look up specific symbols at unicode.org, or in a dedicated Chinese character table.
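These code points are easy to verify from a script; for example, a short Python sketch:

    import unicodedata

    # Look up the code point and official name of a few symbols.
    for ch in ("A", "ع", "严"):
        print(ch, "U+%04X" % ord(ch), unicodedata.name(ch))
    # A U+0041 LATIN CAPITAL LETTER A
    # ع U+0639 ARABIC LETTER AIN
    # 严 U+4E25 CJK UNIFIED IDEOGRAPH-4E25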
4. Unicode Problems
It should be noted that Unicode is only a set of symbols: it specifies only each symbol's binary code, not how that binary code should be stored.
For example, the Unicode code of the Chinese character 严 is the hexadecimal number 4E25, which in binary is 15 bits long (100111000100101), so representing this symbol requires at least two bytes. Other, larger symbols may require three or four bytes, or even more.
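A one-line check of that bit count, in Python:

    # 0x4E25 is 15 bits long, so at least two bytes are needed to store it.
    print(bin(0x4E25), (0x4E25).bit_length())   # 0b100111000100101 15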
This raises two serious problems. First, how do we tell Unicode apart from ASCII? How does the computer know that three bytes represent one symbol rather than three separate symbols? Second, we already know that one byte is enough for an English letter. If Unicode uniformly required every symbol to be represented with three or four bytes, then each English letter would be preceded by two or three bytes of zeros, which is an enormous waste of storage: text files would become two or three times larger, which is unacceptable.
The result was: 1) a number of different Unicode storage schemes appeared, that is, many different binary formats that can be used to represent Unicode; and 2) Unicode could not be widely adopted for a long time, until the emergence of the Internet.
5. UTF-8
With the spread of the Internet, a unified encoding became strongly desirable, and UTF-8 is the most widely used Unicode implementation on the Internet. Other implementations include UTF-16 (characters are represented in two or four bytes) and UTF-32 (characters are represented in four bytes), but they are rarely used on the Internet. To repeat: the relationship here is that UTF-8 is one of the implementations of Unicode.
The biggest feature of UTF-8 is that it is a variable-length encoding. It uses one to four bytes to represent a symbol, and the byte length varies with the symbol.
The UTF-8 encoding rules are very simple; there are only two:
1) For a single-byte symbol, the first bit of the byte is set to 0 and the remaining seven bits hold the symbol's Unicode code. Therefore, for English letters, the UTF-8 encoding is identical to the ASCII code.
2) For a symbol that takes n bytes (n > 1), the first n bits of the first byte are set to 1, the (n+1)-th bit is set to 0, and the first two bits of each of the remaining bytes are set to 10. All the remaining bits, not mentioned above, are filled with the symbol's Unicode code.
The following table summarizes the encoding rules; the letter x marks the bits available for the code.
Unicode symbol range   | UTF-8 encoding
(hexadecimal)          | (binary)
-----------------------+---------------------------------------------
0000 0000 - 0000 007F  | 0xxxxxxx
0000 0080 - 0000 07FF  | 110xxxxx 10xxxxxx
0000 0800 - 0000 FFFF  | 1110xxxx 10xxxxxx 10xxxxxx
0001 0000 - 0010 FFFF  | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
According to the table above, interpreting UTF-8 is very simple: if the first bit of a byte is 0, that byte by itself is one character; if the first bit is 1, the number of consecutive leading 1s tells how many bytes the current character occupies.
Next, let us take the Chinese character 严 as an example to show how the UTF-8 encoding is carried out.
We know that the Unicode code of 严 is 4E25 (100111000100101). From the table above, 4E25 falls in the range of the third row (0000 0800 - 0000 FFFF), so the UTF-8 encoding of 严 needs three bytes, in the format 1110xxxx 10xxxxxx 10xxxxxx. Then, starting from the last binary digit of 严's code, fill in the x positions from back to front, and pad any remaining x positions with 0. The result is that the UTF-8 encoding of 严 is 11100100 10111000 10100101, which in hexadecimal is E4B8A5.
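The following Python sketch implements just this reading step, counting the leading 1 bits of the first byte of a sequence; it is an illustration, not a full UTF-8 decoder:

    def utf8_sequence_length(first_byte: int) -> int:
        """Length in bytes of the UTF-8 sequence that begins with this byte.

        Note: a byte of the form 10xxxxxx is a continuation byte and never
        starts a character; a real decoder would have to reject it here.
        """
        if first_byte < 0b10000000:          # leading bit 0: single-byte (ASCII) character
            return 1
        n = 0
        while first_byte & 0b10000000:       # count the leading 1 bits
            n += 1
            first_byte = (first_byte << 1) & 0xFF
        return n                             # e.g. 1110xxxx -> 3 bytes

    print(utf8_sequence_length(0x41))        # 1  (the letter A)
    print(utf8_sequence_length(0xE4))        # 3  (first byte of 严 in UTF-8)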
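The hand-computed result can be checked against a built-in UTF-8 encoder, for example Python's:

    # Verify the hand-computed UTF-8 encoding of 严.
    encoded = "严".encode("utf-8")
    print(encoded.hex())                                # e4b8a5
    print(" ".join(format(b, "08b") for b in encoded))  # 11100100 10111000 10100101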
6. Conversion between Unicode and UTF-8
From the example in the previous section we can see that the Unicode code of 严 is 4E25 while its UTF-8 encoding is E4B8A5: the two are different. The conversion between them can be done by a program.
On the Windows platform, one of the simplest ways to convert is to use the built-in Notepad applet, notepad.exe. After opening a file, click "Save As" in the "File" menu; a dialog box appears with an "Encoding" drop-down menu at the bottom.
There are four options: ANSI, Unicode, Unicode big endian and UTF-8.
1) ANSI is the default encoding. For English text it is ASCII; for Simplified Chinese text it is gb2312 (this applies only to the Simplified Chinese version of Windows; the Traditional Chinese version uses Big5).
2) Unicode here refers to the UCS-2 encoding, that is, the character's Unicode code stored directly in two bytes. This option uses the little endian byte order.
3) Unicode big endian corresponds to the previous option. The meanings of little endian and big endian are explained in the next section.
4) UTF-8 is the encoding discussed in the previous section.
After choosing an encoding and clicking "Save", the file's encoding is converted immediately.
7. little endian and big endian
As mentioned in the previous section, a Unicode code can be stored directly in UCS-2 format. Take the Chinese character 严 as an example: its Unicode code is 4E25 and it must be stored in two bytes, one being 4E and the other 25. Storing 4E first and 25 second is the big endian order; storing 25 first and 4E second is the little endian order.
These two odd names come from Gulliver's Travels by the English writer Swift. In the book, a civil war breaks out in Lilliput over whether eggs should be cracked from the big end (big-endian) or the little end (little-endian). Six wars were fought over this; one emperor lost his life and another lost his throne.
Therefore, putting the high-order byte first is the "big endian" way, and putting the low-order byte first is the "little endian" way.
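Both byte orders are easy to reproduce; for example, in Python:

    # Store the code point 0x4E25 in two bytes, in each byte order.
    print((0x4E25).to_bytes(2, "big").hex())     # 4e25 -> big endian: 4E first
    print((0x4E25).to_bytes(2, "little").hex())  # 254e -> little endian: 25 first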
Naturally, a problem arises: how does a computer know which encoding method is used for a file?
The Unicode specification defines a way to mark this: a character indicating the byte order is placed at the very beginning of each file. This character is named ZERO WIDTH NO-BREAK SPACE and its code is FEFF. That is exactly two bytes, and FF is one greater than FE.
If the first two bytes of a text file are FE FF, the file uses big endian order; if the first two bytes are FF FE, it uses little endian order.
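A minimal byte-order check along these lines might look as follows in Python (it only distinguishes the two UTF-16 byte-order marks described above):

    def utf16_byte_order(first_two_bytes: bytes) -> str:
        """Guess a UTF-16 file's byte order from its first two bytes (the BOM)."""
        if first_two_bytes == b"\xfe\xff":
            return "big endian"
        if first_two_bytes == b"\xff\xfe":
            return "little endian"
        return "no BOM found"

    print(utf16_byte_order(b"\xff\xfe"))   # little endian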
8. Instance
The following is an example.
Open the Notepad program, notepad.exe, and create a new text file whose content is the single character 严; save it in turn with the ANSI, Unicode, Unicode big endian and UTF-8 encodings.
Then use the hex viewing feature of the text editor UltraEdit to inspect each file's internal encoding.
1) ANSI: the file's content is two bytes, D1 CF, which is exactly the gb2312 encoding of 严. This also implies that gb2312 stores characters with the high-order byte first.
2) Unicode: the content is four bytes, FF FE 25 4E, where FF FE indicates little endian storage and the actual code is 4E25.
3) Unicode big endian: the content is four bytes, FE FF 4E 25, where FE FF indicates big endian storage.
4) UTF-8: the content is six bytes, EF BB BF E4 B8 A5. The first three bytes, EF BB BF, indicate that this is UTF-8 encoding; the last three, E4 B8 A5, are the encoding of 严, and their storage order matches the encoding order. (A script reproducing all four results is sketched below.)
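For readers who do not have UltraEdit at hand, the same four byte sequences can be reproduced from a script. The sketch below uses Python; mapping Notepad's four options to the codecs gb2312, UTF-16 LE/BE and UTF-8, and adding the byte-order marks explicitly where Notepad would write them, is my own reconstruction of the experiment:

    import codecs

    CH = "严"
    samples = {
        "ANSI (gb2312)":       CH.encode("gb2312"),
        "Unicode (UTF-16 LE)": codecs.BOM_UTF16_LE + CH.encode("utf-16-le"),
        "Unicode big endian":  codecs.BOM_UTF16_BE + CH.encode("utf-16-be"),
        "UTF-8 with BOM":      codecs.BOM_UTF8 + CH.encode("utf-8"),
    }
    for name, data in samples.items():
        print(f"{name:20} {' '.join(f'{b:02X}' for b in data)}")
    # ANSI (gb2312)        D1 CF
    # Unicode (UTF-16 LE)  FF FE 25 4E
    # Unicode big endian   FE FF 4E 25
    # UTF-8 with BOM       EF BB BF E4 B8 A5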
9. Extended reading
* The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (basic knowledge about character sets)
* Unicode encoding
* RFC 3629: UTF-8, a transformation format of ISO 10646 (the formal specification of UTF-8)
(End)
[Reprint] http://www.ruanyifeng.com/blog/2007/10/ascii_unicode_and_utf-8.html
Character encoding notes: ASCII, Unicode and UTF-8