Character encoding and Python encoding interpretation

This article is reproduced for learning purposes. After a full day tangled up with encoding problems, reading it made character encoding suddenly clear to me; it is excellent, so I am sharing it here for everyone to learn from.

Original link: http://blog.csdn.net/duqi_yc/article/details/22312983

Character encoding

As we've already said, strings are also a data type, but strings have a special problem: character encoding.

Because a computer can only handle numbers, text must be converted to numbers before it can be processed. The earliest computers were designed with 8 bits (bit) as one byte (byte), so the largest integer a single byte can represent is 255 (binary 11111111 = decimal 255). To represent larger integers, more bytes are needed: two bytes can represent integers up to 65535, and four bytes up to 4294967295.
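
These limits are easy to check in the Python interpreter (the rest of this article uses Python 2 sessions, so the same style is used here; the arithmetic is simply 2**n - 1):

>>> print 2 ** 8 - 1
255
>>> print 2 ** 16 - 1
65535
>>> print 2 ** 32 - 1
4294967295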

Since computers were invented in the United States, only 127 characters were encoded into computers at first: the English letters, digits, and some symbols. This encoding is called ASCII; for example, the uppercase letter A is encoded as 65 and the lowercase letter z as 122.

But one byte is clearly not enough to handle Chinese; at least two bytes are needed, and the encoding must not conflict with ASCII. So China created the GB2312 encoding to encode Chinese characters.

As you can imagine, there are hundreds of languages in the world: Japan encoded Japanese into Shift_JIS, Korea encoded Korean into EUC-KR, and with each country setting its own standard, conflicts were inevitable. The result is that text mixing several languages shows up as garbled characters.

As a result, Unicode emerged. Unicode unifies all languages into a single character set, so the garbling problem disappears.

The Unicode standard is still evolving, but the most common form represents a character with two bytes (four bytes for very rare characters). Modern operating systems and most programming languages support Unicode directly.

Now the difference between ASCII and Unicode comes down to this: ASCII uses 1 byte per character, while Unicode usually uses 2 bytes.

The letter A in ASCII is decimal 65, binary 01000001;

The character 0 in ASCII is decimal 48, binary 00110000; note that the character '0' and the integer 0 are different;

The Chinese character 中 is outside the range of ASCII; its Unicode encoding is decimal 20013, binary 01001110 00101101.

You can guess that to turn an ASCII character into Unicode, you only need to pad zeros in front, so the Unicode encoding of A is 00000000 01000001.
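
These values can be checked in the Python 2 interpreter with the ord() function introduced later in this article (a quick sketch):

>>> ord('A')
65
>>> ord('z')
122
>>> ord(u'中')
20013
>>> hex(ord(u'中'))
'0x4e2d'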

A new problem then arises: if all text is encoded in Unicode, the garbling problem disappears, but if your text is almost entirely English, Unicode needs twice the storage of ASCII, which is wasteful for storage and transmission.

So, in the spirit of saving space, UTF-8 appeared as a "variable-length" encoding of Unicode. UTF-8 encodes a Unicode character into 1 to 6 bytes depending on the size of its code point: common English letters take 1 byte, Chinese characters usually take 3 bytes, and only very rare characters take 4 to 6 bytes. If the text you transfer contains a lot of English, UTF-8 saves space:

character   ASCII       Unicode              UTF-8
A           01000001    00000000 01000001    01000001
中          n/a         01001110 00101101    11100100 10111000 10101101

The table above also reveals an extra benefit of UTF-8: ASCII can be regarded as a subset of UTF-8, so a large amount of legacy software that only supports ASCII continues to work under UTF-8.
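
The byte counts can be verified with the encode() method explained below (a small sketch in the Python 2 interpreter):

>>> len(u'A'.encode('utf-8'))
1
>>> len(u'中'.encode('utf-8'))
3
>>> u'ABC'.encode('utf-8') == u'ABC'.encode('ascii')
True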

Having figured out the relationship between ASCII, Unicode, and UTF-8, we can summarize how character encoding commonly works in computer systems today:

In memory, Unicode is used uniformly; when data needs to be saved to disk or transmitted, it is converted to UTF-8.

When you edit a file with Notepad, the UTF-8 text read from the file is converted to Unicode in memory; when editing is done, the Unicode is converted back to UTF-8 and saved to the file.
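
The same read/convert/write cycle can be sketched in Python 2 with the standard codecs module (the file name notes.txt is just a placeholder):

import codecs

f = codecs.open('notes.txt', 'r', encoding='utf-8')
text = f.read()    # UTF-8 bytes on disk are decoded into a unicode object in memory
f.close()

f = codecs.open('notes.txt', 'w', encoding='utf-8')
f.write(text)      # the unicode object is encoded back to UTF-8 as it is written to disk
f.close()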

When you browse the web, the server converts the dynamically generated Unicode content to UTF-8 before sending it to the browser.

That is why the source of many web pages contains something like <meta charset="UTF-8" />, which indicates that the page is encoded in UTF-8.
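
As a rough Python 2 sketch of that last step (a minimal WSGI application, not the setup behind any particular site), the Unicode content is encoded to UTF-8 and the charset is declared in the response header:

# -*- coding: utf-8 -*-
def app(environ, start_response):
    # Unicode generated in memory, encoded to UTF-8 bytes before being sent
    body = u'<h1>中文</h1>'.encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/html; charset=UTF-8'),
                              ('Content-Length', str(len(body)))])
    return [body]

# it could be served locally with the standard library, e.g.:
# from wsgiref.simple_server import make_server
# make_server('127.0.0.1', 8000, app).serve_forever()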

Python strings

Now that the headache of character encoding is sorted out, let's look at Python's support for Unicode.

Because Python was born before the Unicode standard was released, the earliest Python only supported ASCII, and an ordinary string such as 'ABC' is ASCII-encoded inside Python. Python provides the ord() and chr() functions to convert between single characters and their corresponding numbers:

>>> ord('A')
65
>>> chr(65)
'A'

Python later added support for Unicode; a string expressed in Unicode is written as u'...', for example:

>>> print u'中文'
中文
>>> u'中'
u'\u4e2d'

Writing u'中' and u'\u4e2d' is the same thing; \u is followed by the character's Unicode code point in hexadecimal. Likewise, u'A' and u'\u0041' are the same.
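
This can be checked directly in the interpreter (a quick sketch):

>>> u'中' == u'\u4e2d'
True
>>> u'A' == u'\u0041'
True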

How do the two kinds of strings convert to each other? A string 'xxx', although ASCII-encoded, can also be regarded as UTF-8 encoded, while u'xxx' is Unicode only.

To convert u'xxx' to the UTF-8 encoded 'xxx', use the encode('utf-8') method:

>>> u'ABC'.encode('utf-8')
'ABC'
>>> u'中文'.encode('utf-8')
'\xe4\xb8\xad\xe6\x96\x87'

An English character converted to UTF-8 has the same value as in Unicode (though it occupies a different amount of storage), while 1 Unicode Chinese character becomes 3 UTF-8 bytes. You see \xe4 among them because its value is 228, which has no printable character, so the byte's value is shown in hexadecimal. The len() function returns the length of a string:

>>> len(u'ABC')
3
>>> len('ABC')
3
>>> len(u'中文')
2
>>> len('\xe4\xb8\xad\xe6\x96\x87')
6
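
The byte value mentioned above is easy to verify as well (another quick check):

>>> ord('\xe4')
228
>>> hex(228)
'0xe4'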

Conversely, to convert a UTF-8 encoded string 'xxx' to a Unicode string u'xxx', use the decode('utf-8') method:

>>> 'abc'.decode('utf-8')
u'abc'
>>> '\xe4\xb8\xad\xe6\x96\x87'.decode('utf-8')
u'\u4e2d\u6587'
>>> print '\xe4\xb8\xad\xe6\x96\x87'.decode('utf-8')
中文
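
In other words, encode() and decode() are inverses of each other; a quick round-trip check:

>>> u'中文'.encode('utf-8').decode('utf-8') == u'中文'
True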

Because Python source code is also a text file, when your source contains Chinese, be sure to save the file as UTF-8. And so that the Python interpreter reads the source as UTF-8, we usually write these two lines at the top of the file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

The first comment line tells Linux/OS X that this is an executable Python program; Windows ignores it.

The second comment line tells the Python interpreter to read the source code as UTF-8; otherwise, the Chinese you write in the source may come out garbled.
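
For example, a minimal script (the file name hello.py is just an example) saved as UTF-8:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# the coding declaration lets the interpreter decode the Chinese literal below correctly
print u'中文'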
