Python crawler 403 Forbidden access error details

Solving the 403 Forbidden error in a Python crawler

When you write a crawler in Python, some sites answer with HTTP 403 Forbidden (html.getcode() returns 403): the website is blocking automated crawlers. You can work around this with Python's urllib2 module.

urllib2 is a higher-level URL-fetching module with many useful methods. For example, opening the URL http://blog.csdn.net/qysh123 directly runs into the 403 Forbidden problem.
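For illustration, here is a minimal sketch that reproduces the failure (Python 2, since the article uses urllib2; the URL is the one above):

import urllib2

url = "http://blog.csdn.net/qysh123"
try:
    # A bare request goes out with a "Python-urllib" User-Agent,
    # which the site rejects
    html = urllib2.urlopen(url)
    print html.getcode()
except urllib2.HTTPError as e:
    print e.code  # prints 403 when the site blocks the default headers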

To solve this problem, follow these steps:

req = urllib2.Request(url)
# Spoof a normal browser's request headers
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
req.add_header("Host", "blog.csdn.net")
req.add_header("Referer", "http://blog.csdn.net/")

The User-Agent string identifies the browser. You can look up your own browser's value in its developer tools (for example, among the request headers on the Network tab).

Then

html = urllib2.urlopen(req)
print html.read()

This downloads the full page source without running into the 403 Forbidden error.
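For convenience, here are the steps above as one self-contained script (a sketch using the same URL and headers; any realistic browser header set would do):

import urllib2

url = "http://blog.csdn.net/qysh123"
req = urllib2.Request(url)
# Browser-like headers so the site does not reject the request
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
req.add_header("Host", "blog.csdn.net")
req.add_header("Referer", "http://blog.csdn.net/")

html = urllib2.urlopen(req)
print html.getcode()  # 200 once the request looks like a browser's
print html.read()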

You can wrap the steps above in a function for future calls. The code is as follows:

# -*- coding: utf-8 -*-
import urllib2
import random

url = "http://blog.csdn.net/qysh123/article/details/44564943"

my_headers = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
]

def get_content(url, headers):
    '''Fetch a page that returns 403 Forbidden to requests without browser headers.'''
    random_header = random.choice(headers)
    req = urllib2.Request(url)
    req.add_header("User-Agent", random_header)
    req.add_header("Host", "blog.csdn.net")
    req.add_header("Referer", "http://blog.csdn.net/")
    content = urllib2.urlopen(req).read()
    return content

print get_content(url, my_headers)

random.choice() picks one User-Agent string from the list at random, so successive requests appear to come from different browsers. Inside the function you also set the Host and Referer headers yourself; with those in place, requests go through without any 403 error.

Of course, if your access frequency is too high, some websites will still block you. Getting around that requires proxy IP addresses; the specifics are left to the reader, but a starting point is sketched below.
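As a starting point, here is a minimal sketch of routing urllib2 through a proxy (the proxy address 1.2.3.4:8080 is a placeholder; substitute a working proxy of your own):

import urllib2

# Placeholder proxy address; replace with a real proxy IP and port
proxy = urllib2.ProxyHandler({"http": "http://1.2.3.4:8080"})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)  # later urllib2.urlopen calls go through the proxy

req = urllib2.Request("http://blog.csdn.net/qysh123")
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
print urllib2.urlopen(req).read()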

Thank you for reading this article; I hope it helps you. Thanks for your support of this site!
