1. ASCII code
We know that inside a computer, all information is ultimately represented as a string of binary digits. Each binary bit has two states, 0 and 1, so eight binary bits can express 256 states; this unit is called a byte. In other words, a single byte can represent 256 different states, each corresponding to one symbol, that is, 256 symbols ranging from 00000000 to 11111111.
In the 1960s, the United States developed a set of character codes defining the mapping between English characters and binary values. This is the ASCII code, which is still in use today.
The ASCII code defines 128 characters in total. For example, the space character is 32 (binary 00100000) and the uppercase letter A is 65 (binary 01000001). These 128 symbols (including 32 non-printable control characters) occupy only the last seven bits of a byte, and the first bit is always set to 0.
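These values are easy to verify with a couple of lines of Python 3, shown here only as an illustration:
>>> ord(' ')
32
>>> ord('A')
65
>>> format(ord('A'), '08b')
'01000001'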
2. Non-ASCII Encoding
Encoding English with 128 symbols is enough, but it is not enough for other languages. In French, for example, a letter with a diacritic above it cannot be represented in ASCII. As a result, some European countries decided to use the idle highest bit of the byte to encode new symbols. For example, é in French is encoded as 130 (binary 10000010). In this way, the encoding systems used by these European countries can represent at most 256 symbols.
However, new problems arise. Different countries use different letters, so even though they all use 256 codes, those codes stand for different letters. For example, 130 represents é in the French encoding, the letter gimel (ג) in the Hebrew encoding, and yet another symbol in the Russian encoding. In all of these encodings, however, the codes 0–127 represent the same symbols; only the range 128–255 differs.
As for Asian countries, they use even more characters; there are around 100,000 Chinese characters. A single byte can represent only 256 symbols, so multiple bytes must be used. For example, the common encoding for Simplified Chinese is GB2312, which uses two bytes to represent a Chinese character and can therefore represent at most 256 x 256 = 65536 characters.
The issue of Chinese encoding deserves a dedicated article; this note does not cover it. It is only pointed out here that, although multiple bytes are used to represent one symbol, the GB-class Chinese character encodings have nothing to do with the Unicode and UTF-8 discussed below.
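As a quick illustration (assuming a Python 3 interpreter with the standard gb2312 codec), encoding the character 严 ("strict"), which is used as the running example later in this article, indeed yields two bytes:
>>> '严'.encode('gb2312')
b'\xd1\xcf'
>>> len('严'.encode('gb2312'))
2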
3. Unicode
As the previous section shows, there are many encoding schemes in the world, and the same binary value can be interpreted as different symbols. Therefore, to open a text file you must know its encoding; if you read it with the wrong encoding, you get garbled text. Why do emails so often appear garbled? Because the sender and the receiver use different encodings.
It is easy to imagine that if there were one encoding that included every symbol in the world, with each symbol given a unique code, the garbled-text problem would disappear. This is Unicode: as its name suggests, it is an encoding for all symbols.
Unicode is, of course, a large set; it can currently hold more than one million symbols, each with a different code. For example, U+0639 represents the Arabic letter ain, U+0041 represents the uppercase English letter A, and U+4E25 represents the Chinese character 严 ("strict"). You can look up specific symbols at unicode.org or in a dedicated Chinese character table.
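These code points are easy to check in Python 3 (a minimal illustration):
>>> hex(ord('A'))
'0x41'
>>> hex(ord('严'))
'0x4e25'
>>> '\u4e25'
'严'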
4. Unicode Problems
Note that Unicode is only a character set: it specifies the binary code point of each symbol, but it does not specify how that code point should be stored.
For example, the Unicode code point of 严 ("strict") is the hexadecimal number 4E25, which is 15 bits long in binary (100111000100101). In other words, at least two bytes are needed to represent this symbol. Symbols with larger code points may need 3 or 4 bytes, or even more.
This raises two serious problems. First, how can a computer distinguish Unicode from ASCII? How does it know that three bytes represent one symbol rather than three separate symbols? Second, we already know that a single byte is enough for an English letter. If Unicode uniformly required three or four bytes per symbol, every English letter would be preceded by two or three bytes of zeros, which is a huge waste of storage: text files would become two or three times larger, which is unacceptable.
The result was: 1) multiple storage schemes for Unicode appeared, that is, many different binary formats can be used to represent Unicode; 2) Unicode could not be widely adopted for a long time, until the emergence of the Internet.
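To make the waste concrete, here is a small Python 3 comparison of a fixed-width four-byte Unicode encoding (UTF-32, with the big endian variant chosen so that no byte-order mark is added) against UTF-8:
>>> 'A'.encode('utf-32-be')
b'\x00\x00\x00A'
>>> 'A'.encode('utf-8')
b'A'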
5. UTF-8
With the spread of the Internet, a unified encoding became strongly needed. UTF-8 is the most widely used Unicode implementation on the Internet. Other implementations include UTF-16 and UTF-32, but they are rarely used on the Internet. To repeat: UTF-8 is one of the ways of implementing Unicode.
The biggest feature of UTF-8 is that it is a variable-length encoding. It uses 1 to 4 bytes to represent a symbol, and the length varies with the symbol.
The UTF-8 encoding rules are very simple; there are only two:
1) For a single-byte symbol, the first bit of the byte is set to 0 and the remaining seven bits are the Unicode code of the symbol. Therefore, for English letters, UTF-8 is identical to ASCII.
2) For a symbol of n bytes (n > 1), the first n bits of the first byte are set to 1, the (n+1)-th bit is set to 0, and the first two bits of each of the following bytes are set to 10. The remaining bits, not mentioned above, are filled with the Unicode code of the symbol.
The following table summarizes the encoding rules; the letter x marks the bits available for the code.
Unicode symbol range    | UTF-8 encoding
(hexadecimal)           | (binary)
------------------------+----------------------------------------
0000 0000 - 0000 007F   | 0xxxxxxx
0000 0080 - 0000 07FF   | 110xxxxx 10xxxxxx
0000 0800 - 0000 FFFF   | 1110xxxx 10xxxxxx 10xxxxxx
0001 0000 - 0010 FFFF   | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Next, let us take the Chinese character 严 ("strict") as an example to show how UTF-8 encoding works.
We know that the Unicode code point of 严 is 4E25 (binary 100111000100101). According to the table above, 4E25 falls in the range of the third row (0000 0800 - 0000 FFFF), so the UTF-8 encoding of 严 needs three bytes, in the format 1110xxxx 10xxxxxx 10xxxxxx. Then, starting from the last binary bit of 严's code point, fill in the x positions from back to front, and pad the remaining x positions with 0. The result is that the UTF-8 encoding of 严 is 11100100 10111000 10100101, which is E4B8A5 in hexadecimal.
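The same construction can be written out in a few lines of Python 3. This is only a sketch of the three-byte rule, with the built-in codec used as a cross-check:
cp = 0x4E25                                # Unicode code point of 严
byte1 = 0b11100000 | (cp >> 12)            # 1110xxxx <- top 4 bits
byte2 = 0b10000000 | ((cp >> 6) & 0x3F)    # 10xxxxxx <- middle 6 bits
byte3 = 0b10000000 | (cp & 0x3F)           # 10xxxxxx <- low 6 bits
print(bytes([byte1, byte2, byte3]).hex())  # e4b8a5
print('严'.encode('utf-8').hex())          # e4b8a5, same result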
6. Conversion between Unicode and UTF-8
The example in the previous section shows that the Unicode code of 严 is 4E25 while its UTF-8 encoding is E4B8A5; the two are different. The conversion between them can be done by a program.
On Windows, one of the simplest ways to convert is to use the built-in small program notepad.exe. After opening a file, click "Save As" in the "File" menu; a dialog box pops up with an "Encoding" drop-down menu at the bottom.
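In Python 3, for instance, the conversion is a single encode or decode call (a minimal illustration):
>>> hex(ord('严'))
'0x4e25'
>>> '严'.encode('utf-8').hex()
'e4b8a5'
>>> bytes.fromhex('e4b8a5').decode('utf-8')
'严'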
There are four options: ANSI, Unicode, Unicode big endian and UTF-8.
1) ANSI is the default encoding. English files are saved as ASCII, and Simplified Chinese files are saved as GB2312 (this applies only to the Simplified Chinese version of Windows; the Traditional Chinese version uses Big5).
2) Unicode here refers to the UCS-2 encoding, which stores a character's Unicode code point directly in two bytes. This option uses the little endian byte order.
3) Unicode big endian corresponds to the previous option. The meanings of little endian and big endian are explained in the next section.
4) UTF-8 is the encoding described in the previous section.
After selecting an encoding and clicking the Save button, the file's encoding is converted immediately.
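The same conversions can also be scripted. The sketch below is a rough Python 3 counterpart of the four Save As options; the mapping of option names to codec names is my own reading, and the file names are placeholders:
text = '严'
with open('ansi.txt', 'w', encoding='gb2312') as f:           # "ANSI" on Simplified Chinese Windows
    f.write(text)
with open('unicode.txt', 'w', encoding='utf-16-le') as f:     # "Unicode" (UCS-2 little endian; unlike Notepad, no BOM is written)
    f.write(text)
with open('unicode_be.txt', 'w', encoding='utf-16-be') as f:  # "Unicode big endian" (again, no BOM is written here)
    f.write(text)
with open('utf8.txt', 'w', encoding='utf-8-sig') as f:        # "UTF-8" with a leading BOM, as Notepad writes it
    f.write(text)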
7. little endian and big endian
As mentioned in the previous section, a Unicode code point can be stored directly in UCS-2 format. Take 严 ("strict") as an example: its code point is 4E25 and needs two bytes, one being 4E and the other 25. If 4E is stored first and 25 second, that is the big endian order; if 25 comes first and 4E second, that is the little endian order.
These two odd names come from Gulliver's Travels by the English writer Swift. In the book, a civil war breaks out in Lilliput over whether a boiled egg should be cracked at its big end (Big-Endian) or its little end (Little-Endian). Six wars were fought over this; one emperor lost his life and another lost his throne.
Accordingly, the byte order with the most significant byte first is called "big endian", and the order with the least significant byte first is called "little endian".
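The two byte orders are easy to see with Python 3's UTF-16 codecs (these two variants write no byte-order mark):
>>> '严'.encode('utf-16-be').hex()
'4e25'
>>> '严'.encode('utf-16-le').hex()
'254e'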
Naturally, a problem arises: how does a computer know which encoding method is used for a file?
The Unicode specification defines that a character indicating the byte order may be placed at the very beginning of a file. This character is called the zero-width no-break space (ZERO WIDTH NO-BREAK SPACE), and its code is FEFF. It is exactly two bytes, and FF is one greater than FE.
If the first two bytes of a text file are FE FF, the file uses big endian order; if the first two bytes are FF FE, it uses little endian order.
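A minimal sketch of such a check in Python 3, using the BOM constants from the standard codecs module (the function name is my own):
import codecs

def guess_utf16_order(raw: bytes) -> str:
    if raw.startswith(codecs.BOM_UTF16_BE):   # b'\xfe\xff'
        return 'big endian'
    if raw.startswith(codecs.BOM_UTF16_LE):   # b'\xff\xfe'
        return 'little endian'
    return 'unknown (no BOM)'

print(guess_utf16_order(b'\xfe\xff\x4e\x25'))  # big endian
print(guess_utf16_order(b'\xff\xfe\x25\x4e'))  # little endian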
8. Instance
The following is an example.
Open notepad.exe and create a new text file whose content is the single character 严 ("strict"); save it, in turn, with the ANSI, Unicode, Unicode big endian and UTF-8 encodings.
Then use the hexadecimal view of the text editor UltraEdit to inspect the internal encoding of each file.
1) ANSI: the file content is the two bytes D1 CF, which is the GB2312 encoding of 严; this also implies that GB2312 is stored in big endian order.
2) Unicode: the file content is the four bytes FF FE 25 4E, where FF FE indicates little endian storage and the actual encoding is 4E25.
3) Unicode big endian: the file content is the four bytes FE FF 4E 25; FE FF indicates big endian storage.
4) UTF-8: the file content is the six bytes EF BB BF E4 B8 A5. The first three bytes, EF BB BF, indicate that this is UTF-8 encoded; the last three, E4 B8 A5, are the encoding of 严, and their storage order is the same as the encoding order.
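The same bytes can also be inspected without UltraEdit, for example with a short Python 3 script (the file names are placeholders for the four files saved above):
for name in ['ansi.txt', 'unicode.txt', 'unicode_be.txt', 'utf8.txt']:
    with open(name, 'rb') as f:
        print(name, f.read().hex())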