Solving the 403 Forbidden error in a Python crawler

Source: Internet
Author: User
Tags: 403 Forbidden error
This article describes how to solve the 403 Forbidden error in a Python crawler; readers who need it can refer to the following.

Python crawler: solving the 403 Forbidden error

When writing a crawler in Python, a call such as html.getcode() can run into the 403 Forbidden problem: the site is automatically banning crawlers. To solve this, we need Python's urllib2 module.

urllib2 is a more advanced crawling module with many methods available. For a connection such as url=http://blog.csdn.net/qysh123, there is a chance of running into the 403 Forbidden access problem.
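To illustrate, here is a minimal sketch (Python 2) of how the error typically surfaces. Note that urllib2.urlopen raises an HTTPError for a 403 response, so in practice the status code is read from the exception rather than from getcode():

import urllib2

url = "http://blog.csdn.net/qysh123"
try:
    html = urllib2.urlopen(url)
    print html.getcode()   # 200 when the request is allowed
except urllib2.HTTPError as e:
    print e.code           # 403 when the site blocks the default client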

To solve this problem, the following steps are required:

req = urllib2.Request(url)
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
req.add_header("GET", url)
req.add_header("Host", "blog.csdn.net")
req.add_header("Referer", "http://blog.csdn.net/")

Here, User-Agent is a property specific to the browser; you can see its value by inspecting a request in the browser (for example, in the developer tools).

And then:

html = urllib2.urlopen(req)
print html.read()

With this, you can download the full page source without running into the 403 Forbidden problem.
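If you want to keep the page rather than just print it, a small follow-up sketch (the filename page.html is an arbitrary choice, not from the original article):

content = html.read()  # read() can only be consumed once per response
with open("page.html", "w") as f:  # "page.html" is an arbitrary output name
    f.write(content)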

These header-setting steps can be encapsulated in a function so they are convenient to call later; the specific code:

# -*- coding: utf-8 -*-
import urllib2
import random

url = "http://blog.csdn.net/qysh123/article/details/44564943"

my_headers = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
]

def get_content(url, headers):
    '''Fetch a page that would otherwise answer with 403 Forbidden.'''
    random_header = random.choice(headers)  # pick a User-Agent at random
    req = urllib2.Request(url)
    req.add_header("User-Agent", random_header)
    req.add_header("Host", "blog.csdn.net")
    req.add_header("Referer", "http://blog.csdn.net/")
    req.add_header("GET", url)
    content = urllib2.urlopen(req).read()
    return content

print get_content(url, my_headers)

Here the random module automatically picks one of the pre-written browser User-Agent strings, and the custom function writes its own Host, Referer, and GET headers. With these issues handled, access proceeds smoothly and no more 403 errors appear.
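One limitation of the function above is that the Host and Referer headers are hard-coded for blog.csdn.net. As a sketch of a more general variant (my assumption, not part of the original code; get_content_any is a hypothetical name), the host can be derived from the URL with Python 2's urlparse module:

import urllib2
import random
import urlparse  # Python 2 standard library

def get_content_any(url, headers):
    '''Like get_content, but derives Host and Referer from the URL itself.'''
    host = urlparse.urlparse(url).netloc
    req = urllib2.Request(url)
    req.add_header("User-Agent", random.choice(headers))
    req.add_header("Host", host)
    req.add_header("Referer", "http://%s/" % host)
    return urllib2.urlopen(req).read()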

Of course, if the access frequency is too high, some sites will still filter the requests. Solving that requires the proxy-IP approach... the specifics are left for the reader to pursue.
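For reference, a minimal sketch of the proxy approach hinted at above, using urllib2's ProxyHandler; the proxy address below is a placeholder you would replace with a real proxy:

import urllib2

proxy = urllib2.ProxyHandler({"http": "127.0.0.1:8080"})  # placeholder address
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
# From here on, urllib2.urlopen() requests are routed through the proxy.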

"Recommended"

1. Special recommendation : "PHP Programmer Toolkit" V0.1 version download

2. Python Free video tutorial

3. Python's application in Data Science video tutorial

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.