Character encoding is a cornerstone of computing: to use a computer proficiently, you need to know at least a little about it.
Character encoding, as I understand it:
It is the mapping between characters and bits. Exactly how they correspond depends on which character encoding is in use.
An introduction to encoding schemes (excerpted; the original source is unknown):
There are many encoding schemes in computing, such as ASCII, ISO-8859-1, GB2312, GBK, UTF-8, and UTF-16. Each can be thought of as a dictionary that prescribes conversion rules; following those rules, the computer can represent our characters correctly. Several of these formats, such as GB2312, GBK, UTF-8, and UTF-16, can all represent Chinese characters, so which one should we choose for storing Chinese text? That depends on other factors: is storage space more important, or encoding efficiency? Weighing these factors lets you choose the right format. The sections below briefly introduce each of these encodings.
1. ASCII code
We know that inside a computer, all information is ultimately represented as a string of binary digits. Each bit has two states, 0 and 1, so eight bits can be combined into 256 states; eight bits make up a byte. In other words, one byte can represent 256 different states, each corresponding to a symbol, giving 256 symbols from 00000000 to 11111111.
In the 1960s, the United States developed a character encoding that made uniform rules for the relationship between English characters and bits. This is known as ASCII, and it is still in use today.
ASCII specifies 128 characters in total. For example, the space character is 32 (binary 00100000) and the uppercase letter A is 65 (binary 01000001). Of these 128 symbols, 0-31 (plus 127) are non-printing control characters such as newline, carriage return, and delete, while 32-126 are printable characters that can be typed on a keyboard and displayed. They occupy only the last 7 bits of a byte; the first bit is always 0.
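These values are easy to verify in Python with the built-in `ord` and `chr` functions (a quick check, not part of the original article):

```python
# ASCII maps each character to a number in 0-127,
# stored in the low 7 bits of a single byte.
print(ord(' '))                  # code point of the space character: 32
print(ord('A'))                  # code point of uppercase A: 65
print(format(ord('A'), '08b'))   # its 8-bit binary form: 01000001
print('A'.encode('ascii'))       # encodes to exactly one byte
```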
2. Non-ASCII encodings
Encoding 128 symbols is enough for English, but 128 symbols cannot represent other languages. In French, for example, letters can carry accent marks that ASCII cannot represent. As a result, some European countries decided to use the idle highest bit of the byte to encode new symbols. For example, the code for é in French is 130 (binary 10000010). In this way, the encoding systems used in these European countries can represent up to 256 symbols.
However, new problems arose. Different countries use different letters, so even though they all use 256-symbol encodings, the letters those codes represent differ. For example, 130 represents é in the French encoding, the letter Gimel (ג) in the Hebrew encoding, and yet another symbol in the Russian encoding. In all of these encodings, though, 0-127 represent the same symbols; only the range 128-255 differs.
Asian countries use even more symbols; there are roughly 100,000 Chinese characters. One byte can represent only 256 symbols, which is certainly not enough, so multiple bytes must be used to express a single symbol. For example, the common encoding for Simplified Chinese is GB2312, which uses two bytes to represent one Chinese character, so in theory it can represent up to 256 x 256 = 65,536 symbols.
(1) ISO-8859-1
Since 128 characters is clearly not enough, the ISO organization developed a series of extensions to ASCII, named ISO-8859-1 through ISO-8859-15. ISO-8859-1 covers most Western European characters and is the most widely used of the series. ISO-8859-1 is still a single-byte encoding and can represent 256 characters in total.
(2) GB2312
Its full name is "Code of Chinese Graphic Character Set for Information Interchange, Basic Set". It is a double-byte encoding whose overall range is A1-F7: A1-A9 is the symbol area, containing 682 symbols, and B0-F7 is the Chinese-character area, containing 6,763 Chinese characters.
(3) GBK
Its full name is the "Chinese Internal Code Extension Specification", drawn up by the National Technical Supervision Bureau as a new Chinese character encoding for Windows 95. It extends GB2312 with more Chinese characters: its range is 8140-FEFE (excluding xx7F), a total of 23,940 code points, expressing 21,003 Chinese characters. Its encoding is compatible with GB2312, meaning text encoded with GB2312 can be decoded with GBK without producing garbled characters.
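Python ships codecs for both encodings, so the compatibility claim can be checked directly (a small sketch, not from the original article; 严 is the example character used later in this post):

```python
# GB2312 uses two bytes per Chinese character;
# GBK is a superset, so the bytes agree for any GB2312 character.
gb = '严'.encode('gb2312')
print(gb.hex())                  # the two GB2312 bytes
print(gb == '严'.encode('gbk'))  # GBK is backward compatible: True
print(gb.decode('gbk'))          # GB2312 bytes decode cleanly as GBK
```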
(4) GB18030
Its full name is the "Chinese Character Coded Character Set for Information Interchange", and it is a mandatory standard in China. It may use single-byte, double-byte, or four-byte encodings, and it is compatible with GB2312. Although it is the national standard, it is not widely used in real-world systems.
(5) UTF-16
Any mention of UTF leads to Unicode ("universal code"). ISO set out to create a new super-dictionary through which all the world's languages could be translated into one another. One can imagine how complex such a dictionary is; for the detailed specification of Unicode, refer to the corresponding documentation. Unicode is the foundation of Java and XML. The following sections detail how Unicode is stored in a computer.
UTF-16 defines concretely how Unicode characters are stored in a computer. UTF-16 uses two bytes to represent the Unicode transformation format. For characters in the Basic Multilingual Plane this is a fixed-length representation: any such character can be expressed in two bytes, i.e. 16 bits, hence the name UTF-16 (characters outside that plane require a four-byte surrogate pair). Representing characters with UTF-16 is very convenient: every two bytes stand for one character, which greatly simplifies string operations. This is an important reason why Java uses UTF-16 as its in-memory character format.
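The two-byte claim can be checked in Python (a small sketch, not part of the original article; the `utf-16-be` codec avoids the byte-order mark that the plain `utf-16` codec prepends):

```python
# Characters in the Basic Multilingual Plane take two bytes in UTF-16.
print(len('A'.encode('utf-16-be')))           # 2
print(len('严'.encode('utf-16-be')))          # 2
# Characters beyond U+FFFF need a surrogate pair, i.e. four bytes.
print(len('\U0001F600'.encode('utf-16-be')))  # 4
```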
(6) UTF-8
UTF-16 uses two bytes uniformly per character. Although this representation is simple and convenient, it has its drawbacks: a large number of characters that need only one byte now take two, doubling the storage space. With network bandwidth still very limited today, this increases network traffic unnecessarily. UTF-8 instead uses a variable-length technique, with different code lengths for different ranges; a character can be made up of 1 to 6 bytes.
UTF-8 has the following encoding rules:
If a byte's highest bit (the 8th bit) is 0, it is an ASCII character (00-7F). Thus all ASCII text is already valid UTF-8.
If a byte begins with 11, the number of consecutive leading 1s indicates how many bytes this character occupies. For example, 110xxxxx means this is the first byte of a two-byte UTF-8 character.
If a byte begins with 10, it is not a first byte; search backwards to find the first byte of the current character.
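These rules can be sketched as a small byte classifier (an illustrative helper, not from the original article):

```python
def classify_utf8_byte(b: int) -> str:
    """Classify one byte of a UTF-8 stream by its leading bits."""
    if b < 0x80:
        return 'ascii'            # 0xxxxxxx: a single-byte character
    if b < 0xC0:
        return 'continuation'     # 10xxxxxx: not a first byte
    bits = format(b, '08b')
    n = len(bits) - len(bits.lstrip('1'))  # run of leading 1s = byte count
    return f'lead byte of a {n}-byte sequence'

# The three bytes of 严 in UTF-8: one lead byte, two continuations.
for byte in '严'.encode('utf-8'):
    print(hex(byte), classify_utf8_byte(byte))
```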
The issue of Chinese encodings deserves a dedicated discussion, which this note does not cover. It is only pointed out here that although they also use multiple bytes per symbol, the GB-family Chinese character encodings are unrelated to Unicode and UTF-8.
3. Unicode
As mentioned in the previous section, the world has many encoding schemes, and the same sequence of binary digits can be interpreted as different symbols. Therefore, to open a text file you must know its encoding; interpreting it with the wrong encoding produces garbled text. Why do emails so often appear garbled? Because the sender and the recipient are using different encodings.
One can imagine that if there were a single encoding that included every symbol in the world, with each symbol assigned a unique code, the garbling problem would disappear. That is Unicode: as its name suggests, an encoding for all symbols.
Unicode is of course a large collection, now able to accommodate more than a million symbols. Each symbol's code is different; for example, U+0639 is the Arabic letter ain, U+0041 is the uppercase English letter A, and U+4E25 is the Chinese character 严 (yán, "strict"). For the full symbol table, consult unicode.org or a dedicated Chinese character table.
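These code points are easy to verify in Python, where `ord` returns a character's Unicode code point (a small check, not from the original article):

```python
print(hex(ord('ع')))   # Arabic letter ain, U+0639
print(hex(ord('A')))   # uppercase A, U+0041
print(hex(ord('严')))  # the Chinese character 严, U+4E25
```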
4. Problems with Unicode
It is important to note that Unicode is only a set of symbols: it specifies each symbol's binary code point, but not how that code point should be stored.
For example, the Unicode code point of 严 is the hexadecimal number 4E25, which in binary is a full 15 bits (100111000100101). In other words, this symbol requires at least two bytes. Other, larger symbols may require three or four bytes, or even more.
This raises two serious problems. First, how can the computer distinguish Unicode from ASCII? How does it know that three bytes represent one symbol rather than three separate symbols? Second, we already know that one byte is enough for English letters; if Unicode uniformly required three or four bytes per symbol, every English letter would necessarily be padded with two or three bytes of zeros. That is a great waste of storage: text files would become two or three times larger, which is unacceptable.
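The waste can be quantified with UTF-32, a fixed four-byte encoding of Unicode (a quick sketch, not from the original article):

```python
text = 'hello'
print(len(text.encode('ascii')))      # 5 bytes: one per letter
print(len(text.encode('utf-32-be')))  # 20 bytes: four per letter, mostly zeros
```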
The results were: 1) a variety of Unicode storage formats emerged, i.e. many different binary formats can be used to represent Unicode; 2) Unicode could not be popularized for a long time, until the advent of the Internet.
5. UTF-8
The spread of the Internet created a strong demand for a unified encoding, and UTF-8 is the most widely used implementation of Unicode on the Internet. Other implementations include UTF-16 and UTF-32, though they are rarely used on the web. To repeat the relationship here: UTF-8 is one of the ways Unicode is implemented.
One of UTF-8's biggest features is that it is a variable-length encoding. It uses one to four bytes to represent a symbol, with the byte length varying by symbol.
The encoding rules for UTF-8 are simple; there are only two:
1) For a single-byte symbol, the first bit of the byte is set to 0 and the following 7 bits are the symbol's Unicode code point. So for English letters, UTF-8 and ASCII are identical.
2) For a symbol of n bytes (n > 1), the first n bits of the first byte are set to 1, the (n+1)th bit is set to 0, and the first two bits of each subsequent byte are set to 10. All the remaining, unmentioned bits are filled with the symbol's Unicode code point.
The following table summarizes the encoding rules, and the letter x represents the bits that are available for encoding.
Unicode Symbol Range | UTF-8 Encoding Method
(hex) | (binary)
-----------------------+---------------------------------------------
0000 0000-0000 007F | 0xxxxxxx
0000 0080-0000 07FF | 110xxxxx 10xxxxxx
0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Below, again taking the Chinese character 严 as an example, let us demonstrate how to derive its UTF-8 encoding.
Known as "Strict" Unicode is 4E25 (100111000100101), according to the table above, you can find 4E25 in the range of the third row (0000 0800-0000 FFFF), so "strict" UTF-8 encoding requires three bytes, that is, the format is " 1110xxxx 10xxxxxx 10xxxxxx ". Then, starting from the last bits of "Yan", the X in the format is filled in sequentially, and the extra bits complement 0. This gets, "strict" UTF-8 code is "11100100 10111000 10100101", converted into 16 binary is e4b8a5.
6. Conversion between Unicode and UTF-8
Using the example from the previous section: the Unicode code point of 严 is 4E25, while its UTF-8 encoding is E4B8A5; the two are not the same. Conversion between them can be done by a program.
On the Windows platform, one of the simplest conversion methods is the built-in Notepad applet, Notepad.exe. After opening a file, click "Save As" on the "File" menu; a dialog box appears with an "Encoding" drop-down at the bottom.
It has four options: ANSI, Unicode, Unicode big endian, and UTF-8.
1) ANSI is the default encoding. For English files it is ASCII; for Simplified Chinese files it is GB2312 (on the Simplified Chinese version of Windows only; the Traditional Chinese version uses Big5).
2) Unicode here refers to the UCS-2 encoding, which stores each character's Unicode code point directly in two bytes. This option uses the little endian byte order.
3) Unicode big endian corresponds to the previous option. The next section explains the meaning of little endian and big endian.
4) UTF-8 is the encoding described in the previous section.
After choosing an encoding, click the "Save" button and the file is immediately converted to it.
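Notepad's "Save As" conversion can be reproduced in code; here is a Python sketch (the file name is illustrative, and note that Python's `utf-8` codec, unlike Notepad's, does not write a BOM; `utf-8-sig` would):

```python
import os
import tempfile

# Save 严 as ANSI (GBK) first, then re-save it as UTF-8,
# mimicking Notepad's "Save As" with a different encoding.
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

with open(path, 'w', encoding='gbk') as f:
    f.write('严')
print(open(path, 'rb').read().hex())   # d1cf: the GBK bytes

with open(path, encoding='gbk') as f:  # reopen, declaring the old encoding
    text = f.read()
with open(path, 'w', encoding='utf-8') as f:
    f.write(text)
print(open(path, 'rb').read().hex())   # e4b8a5: the UTF-8 bytes
```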
7. Little endian and big endian
As mentioned in the previous section, Unicode code points can be stored directly in UCS-2 format. Taking 严 as an example again: its code point is 4E25 and needs two bytes of storage, one byte being 4E and the other 25. Storing 4E first and 25 second is the big endian order; storing 25 first and 4E second is the little endian order.
These two quirky names come from Gulliver's Travels by the British writer Swift. In the book, a civil war breaks out in the little country over whether a boiled egg should be cracked open at the big end (Big-Endian) or the little end (Little-Endian). Over this question, six wars were fought; one emperor lost his life and another lost his throne.
Hence, storing the most significant byte first is the "big endian" order, and storing the least significant byte first is the "little endian" order.
A natural question then arises: how does the computer know which byte order a given file uses?
The Unicode specification defines that each file may begin with a character indicating the byte order, named the "zero-width no-break space" (ZERO WIDTH NO-BREAK SPACE), whose code point is FEFF. This is exactly two bytes, and FF is one greater than FE.
If the first two bytes of a text file are FE FF, the file is big endian; if the first two bytes are FF FE, the file is little endian.
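Python's `codecs` module exposes these byte-order marks as constants, so BOM sniffing can be sketched directly (an illustrative helper; real files may carry no BOM at all):

```python
import codecs

def detect_bom(raw: bytes) -> str:
    """Guess a text file's encoding from its first bytes."""
    if raw.startswith(codecs.BOM_UTF8):      # EF BB BF
        return 'utf-8-sig'
    if raw.startswith(codecs.BOM_UTF16_BE):  # FE FF: big endian
        return 'utf-16-be'
    if raw.startswith(codecs.BOM_UTF16_LE):  # FF FE: little endian
        return 'utf-16-le'
    return 'unknown'

print(detect_bom(b'\xfe\xff\x4e\x25'))  # utf-16-be
print(detect_bom(b'\xff\xfe\x25\x4e'))  # utf-16-le
```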
8. Example
Here is an example.
Open Notepad (Notepad.exe), create a new text file whose content is the single character 严, and save it in turn with the ANSI, Unicode, Unicode big endian, and UTF-8 encodings.
Then use the hex-view feature of the text editor UltraEdit to examine the file's internal encoding.
1) ANSI: the file's encoding is the two bytes "D1 CF", which is the GB2312 encoding of 严; this also implies that GB2312 stores characters in big endian order.
2) Unicode: the encoding is the four bytes "FF FE 25 4E", where "FF FE" indicates little endian storage and the actual encoding is 25 4E (i.e. the code point 4E25).
3) Unicode big endian: the encoding is the four bytes "FE FF 4E 25", where "FE FF" indicates big endian storage.
4) UTF-8: the encoding is the six bytes "EF BB BF E4 B8 A5". The first three bytes, "EF BB BF", indicate that this is UTF-8; the last three, "E4 B8 A5", are the actual encoding of 严, stored in the same order as the encoding itself.
A note (the content above still needs refinement and verification): the "Unicode" option in Notepad is also just one implementation of Unicode. Testing shows that in UTF-8 a Chinese character generally takes three bytes and an English letter one byte, which matches the UTF-8 scheme described in the original post; in Notepad's Unicode format, a Chinese character takes two bytes. So for text that is mostly English, UTF-8 saves space, and for text that is mostly Chinese, Notepad's Unicode format saves space.
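This trade-off is easy to measure (a sketch, not from the original post; `utf-16-le` is the byte layout Notepad's "Unicode" option uses, minus the BOM):

```python
english = 'character encoding'
chinese = '字符编码'

# Compare storage costs of the same text in the two encodings.
for label, text in (('English', english), ('Chinese', chinese)):
    u8 = len(text.encode('utf-8'))
    u16 = len(text.encode('utf-16-le'))
    print(f'{label}: UTF-8 = {u8} bytes, UTF-16 = {u16} bytes')
```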
Character encoding: a reprint and summary, still to be refined...