Python: handling character-set conversion when crawling web pages

Source: Internet
Author: User
Tags: python

The problem:

Sometimes, after we crawl a web page and process it, we want to save the resulting string to a file or write it to a database, and at that point we have to settle on a string encoding. If the crawled page is encoded in gb2312 and our database uses utf-8, inserting the text without any conversion may produce garbled characters (I haven't tested whether the database transcodes automatically), so we need to convert gb2312 to utf-8 ourselves.

First of all, we know that ordinary strings in Python 2 are byte strings (ASCII by default). That is fine for English, but it falls apart as soon as Chinese characters appear.
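As a minimal sketch of the difference (assuming Python 2 and a script saved as utf-8 with a coding declaration; the variable names are just for illustration):

# -*- coding: utf-8 -*-
s = "中文"    # ordinary str literal: a byte string in the file's encoding (utf-8 here)
u = u"中文"   # u-prefixed literal: a unicode string of characters

print type(s), len(s)   # <type 'str'> 6       -- length counts bytes
print type(u), len(u)   # <type 'unicode'> 2   -- length counts characters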

You may remember that in Python 2, when you print Chinese characters, you need to prefix the string literal with u:

Print U "to get a base?" "

This way the Chinese displays correctly: the u prefix turns the literal into a unicode string.
This is related to the unicode() built-in, which is used as follows:

Str= "to engage the base"
str=unicode (str, "Utf-8")
print str

The difference from the u prefix is that unicode() converts an existing byte string str into unicode, and you have to specify the second parameter (the source encoding) correctly; here utf-8 is the encoding my test.py script file is saved in, which would probably be ANSI by default.
unicode() is the key here; let's continue.
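A small sketch of why that second parameter matters (Python 2 assumed; the sample bytes are constructed by hand rather than read from a script file):

# -*- coding: utf-8 -*-
gb = u"中文".encode("gb2312")      # a byte string encoded as gb2312

print repr(unicode(gb, "gb2312"))  # correct source encoding: decodes fine
try:
    unicode(gb, "utf-8")           # wrong source encoding
except UnicodeDecodeError as e:
    print "decode failed:", e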

Now let's crawl the Baidu home page. Note: if you visit the Baidu home page as an ordinary visitor and view the source, its charset is gb2312.

import urllib2

def main():
  f = urllib2.urlopen("http://www.baidu.com")
  str = f.read()
  str = unicode(str, "gb2312")
  fp = open("baidu.html", "w")
  fp.write(str.encode("utf-8"))
  fp.close()

if __name__ == '__main__':
  main()

Explanation:
First we fetch the Baidu home page with urllib2.urlopen(); f is a file-like handle, and str = f.read() reads all of the page source into str.

At this point str contains the HTML source we crawled. Because the page's character set is gb2312, if we saved it to a file directly, the file encoding would be ANSI.
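The article assumes the page is gb2312; if you want to confirm the original encoding at run time, something like the hedged sketch below can help. detect_charset is a hypothetical helper, and response.info().getparam() is the charset lookup I believe Python 2 urllib2 responses expose (an assumption worth verifying on your Python version):

import re
import urllib2

def detect_charset(response, html_bytes, default="gb2312"):
  # hypothetical helper: try the Content-Type header, then the meta tag,
  # then fall back to a default
  charset = response.info().getparam("charset")
  if not charset:
    m = re.search(r'charset=["\']?([\w-]+)', html_bytes, re.I)
    if m:
      charset = m.group(1)
  return charset or default

f = urllib2.urlopen("http://www.baidu.com")
data = f.read()
print detect_charset(f, data)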

For most purposes, saving the gb2312 bytes as-is is enough, but sometimes we want to convert gb2312 to utf-8.

First:
str = unicode(str, "gb2312")  # gb2312 is str's actual character set; decode it to unicode

Then:
str = str.encode("utf-8")  # re-encode the unicode string as utf-8

Finally:

Write str to the file, then open the file and check its encoding attribute: it is utf-8. Change <meta charset="gb2312"> to <meta charset="utf-8"> and you have a utf-8 web page. All of this together completes a gb2312 -> utf-8 transcoding.
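For completeness, here is a hedged sketch that folds the meta-tag fix into the crawl-and-save flow. It assumes the page really is gb2312 and that the declaration appears literally as charset=gb2312; a plain replace() is used for simplicity:

import urllib2

f = urllib2.urlopen("http://www.baidu.com")
html = unicode(f.read(), "gb2312")                        # decode from the original charset
html = html.replace(u"charset=gb2312", u"charset=utf-8")  # fix the meta declaration
fp = open("baidu.html", "w")
fp.write(html.encode("utf-8"))                            # save as utf-8
fp.close()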


Summary:

To recap: if you need to save a string in a specified character set, there are three steps (a consolidated sketch follows the list):

1: Decode str into a unicode string with unicode(str, "original encoding").

2: Encode the unicode string into the character set you want with str.encode("target character set").

3: Save str to a file, write it to the database, and so on, making sure the file or database also uses the encoding you specified.
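Here is that consolidated sketch. transcode and out.txt are hypothetical names chosen for illustration, and the sample bytes are built in place instead of coming from a real crawled page:

# -*- coding: utf-8 -*-
def transcode(src_bytes, src_encoding, dst_encoding="utf-8"):
  text = unicode(src_bytes, src_encoding)   # step 1: decode with the original encoding
  return text.encode(dst_encoding)          # step 2: re-encode to the target charset

if __name__ == '__main__':
  gb_bytes = u"中文".encode("gb2312")       # stand-in for bytes from a crawled page
  fp = open("out.txt", "w")                 # step 3: save (the target must expect utf-8)
  fp.write(transcode(gb_bytes, "gb2312"))
  fp.close()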
