The following short article gives a brief introduction to handling character encodings when crawling web pages with Python. I found it quite useful, so I am sharing it here for reference.
Background
During the Mid-Autumn Festival, a friend emailed me saying that while crawling Lianjia (a real-estate listings site), the content returned from the web page was garbled, and he asked me to help him debug it (working overtime during the Mid-Autumn Festival, truly dedicated!). In fact, I ran into this problem a long time ago: I glanced at it while crawling a novel site, but never took it seriously. At its root, the problem comes from a poor understanding of character encodings.
Problem
A common crawler code is as follows:
# coding=utf-8
import re
import requests
import sys

reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://jb51.net/ershoufang/rs%E6%8B%9B%E5%95%86%E6%9E%9C%E5%B2%AD/'
res = requests.get(url)
print res.text
The goal is simple: fetch the Lianjia listing pages. After running it, however, every returned result that involves Chinese text comes out garbled, such as:
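To see why this kind of garbling happens, here is a minimal sketch (Python 3, no network required; the sample string is hypothetical). When a response's `Content-Type` header carries no charset, `requests` historically falls back to ISO-8859-1, so UTF-8 bytes get decoded byte-by-byte into Latin-1 characters:

```python
# -*- coding: utf-8 -*-

# Bytes as a server would actually send them: UTF-8 encoded Chinese text.
# The sample string is just an illustration, not from the real page.
raw = u"招商果岭".encode("utf-8")

# Decoding UTF-8 bytes with the wrong codec (ISO-8859-1) produces mojibake:
# each byte becomes one Latin-1 character instead of a multi-byte sequence.
garbled = raw.decode("iso-8859-1")
print(garbled)  # prints garbled characters (mojibake)

# Because ISO-8859-1 maps every byte value to a character, the original
# bytes can be recovered losslessly, then decoded with the right codec.
fixed = garbled.encode("iso-8859-1").decode("utf-8")
print(fixed)  # 招商果岭
```

The round trip works only because ISO-8859-1 is a lossless single-byte encoding; re-encoding the garbled text restores the original bytes, which can then be decoded as UTF-8.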