Python crawler: resolving the 403 Forbidden access error (Python)

Source: Internet
Author: User

When writing a crawler in Python, html.getcode() may return a 403 Forbidden error, which means the website is blocking automated crawlers. To work around this, we need Python's urllib2 module.


urllib2 is a higher-level module for fetching pages and offers many useful methods.

Suppose we want to connect to url = http://blog.csdn.net/qysh123

This connection may be refused with a 403 Forbidden error.

To solve this problem, you need the following steps:

req = urllib2.Request(url)
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
req.add_header("GET", url)
req.add_header("Host", "blog.csdn.net")
req.add_header("Referer", "http://blog.csdn.net/")

The User-Agent is a browser-specific property; you can find your own browser's value by inspecting a request in its developer tools.
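To see why this header matters, note that by default Python announces itself plainly as a script, which is exactly what sites key on when they return 403. A minimal sketch in Python 3, where urllib.request is the counterpart of the urllib2 module used in this article:

```python
import urllib.request

# Build a default opener and inspect the headers it sends with every request.
opener = urllib.request.build_opener()
default_headers = dict(opener.addheaders)

# The default User-Agent identifies the client as a Python script,
# which many sites block outright.
print(default_headers["User-agent"])
```

Replacing this default with a real browser's User-Agent string is the whole trick behind the fix above.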


Then:

html = urllib2.urlopen(req)
print html.read()

This downloads the full page source without triggering the 403 Forbidden error.
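On Python 3, where urllib2 has been folded into urllib.request, the same steps look like the sketch below; the request is built and inspected without performing the network call:

```python
import urllib.request

url = "http://blog.csdn.net/qysh123"

req = urllib.request.Request(url)
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36")
req.add_header("Host", "blog.csdn.net")
req.add_header("Referer", "http://blog.csdn.net/")

# urllib.request normalizes header names to Capitalized-lowercase form.
print(req.get_header("User-agent"))

# The actual fetch would then be:
#   html = urllib.request.urlopen(req)
#   print(html.read())
```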


The steps above can be wrapped in a function for convenient reuse later. The full code:

# -*- coding: utf-8 -*-
import urllib2
import random

url = "http://blog.csdn.net/qysh123/article/details/44564943"

my_headers = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)"
]

def get_content(url, headers):
    '''Fetch a page that returns 403 to the default User-Agent.'''
    random_header = random.choice(headers)
    req = urllib2.Request(url)
    req.add_header("User-Agent", random_header)
    req.add_header("Host", "blog.csdn.net")
    req.add_header("Referer", "http://blog.csdn.net/")
    req.add_header("GET", url)
    content = urllib2.urlopen(req).read()
    return content

print get_content(url, my_headers)

 

It uses the random module's random.choice to pick one of the pre-written browser User-Agent strings automatically; inside the custom function you also set your own Host, Referer, and GET information. With these in place, the pages can be accessed smoothly and the 403 error no longer appears.
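The rotation itself is just random.choice over the list: each call picks one entry at random, so successive requests present different browser identities. A small sketch (User-Agent strings abbreviated from the list above):

```python
import random

# A pool of User-Agent strings (abbreviated from the full list above).
my_headers = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) ... Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) ... Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
]

# Each call returns one string from the pool, chosen uniformly at random.
random_header = random.choice(my_headers)
print(random_header)
```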

Of course, if you request too quickly, some websites will still filter you out. Solving that requires rotating proxy IPs, which is left for you to work out.
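One way to route requests through a proxy in Python 3's urllib.request is a ProxyHandler; the address below is a hypothetical placeholder, not a working proxy:

```python
import urllib.request

# Hypothetical proxy address -- replace with a real proxy you control.
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:8080"})
opener = urllib.request.build_opener(proxy)

# Installing the opener makes all subsequent urlopen() calls use the proxy.
urllib.request.install_opener(opener)

# opener.open("http://blog.csdn.net/qysh123")  # network call, left commented out
```

In practice you would keep a list of proxy addresses and rotate them the same way the User-Agent strings are rotated above.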
