Python Web Crawler: A Basic Implementation

Source: Internet
Author: User
Tags: python, web crawler
First, let's look at Python's page-fetching libraries: urllib and urllib2.

So what is the difference between urllib and urllib2?
urllib2 can be seen as an extension of urllib; its most obvious advantage is that urllib2.urlopen() can accept a Request object as its parameter, which lets you control the headers of the HTTP request.
urllib2 should be preferred when making HTTP requests, but urllib.urlretrieve() and the quote/unquote family of functions (such as urllib.quote()) are not included in urllib2, so urllib is sometimes still needed as a helper.
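
As a minimal sketch of that difference (assuming Python 2, where urllib2 is available; the User-Agent value is just an example):

import urllib2

# Build a Request object so we can control the HTTP headers.
request = urllib2.Request('http://www.baidu.com',
                          headers={'User-Agent': 'Mozilla/5.0'})
response = urllib2.urlopen(request)
print response.getcode()     # HTTP status code
print response.read()[:200]  # first 200 bytes of the body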

The URL passed to urllib.urlopen() must use a supported scheme, such as http, ftp, or file. For example:

urllib.urlopen('http://www.baidu.com')
urllib.urlopen('file:d:\\python\\hello.py')

Here is an example that downloads all the GIF images from a web page. The Python code looks like this:

import re
import urllib

def gethtml(url):
    # Fetch the page and return its HTML as a string.
    page = urllib.urlopen(url)
    html = page.read()
    return html

def getimg(html):
    # Find every src="...gif" attribute and download each image.
    reg = r'src="(.*?\.gif)"'
    imgre = re.compile(reg)
    imglist = imgre.findall(html)
    print imglist
    cnt = 1
    for imgurl in imglist:
        urllib.urlretrieve(imgurl, '%s.gif' % cnt)  # save as 1.gif, 2.gif, ...
        cnt += 1

if __name__ == '__main__':
    html = gethtml('http://www.baidu.com')
    getimg(html)

Following this approach, we can crawl a given page and then extract the data we need. (Note that urlretrieve() needs absolute image URLs; if the src attributes are relative paths, they must first be joined with the page URL, for example with urlparse.urljoin().)

In practice, using the urllib module for crawling is quite inefficient, so let us introduce the Tornado web server.
Tornado is a very lightweight, highly scalable, non-blocking-IO web server written in Python; the well-known FriendFeed site was built on it. Unlike most mainstream web server frameworks (primarily the Python frameworks), Tornado uses epoll-based non-blocking IO, so it responds quickly, can handle thousands of concurrent connections, and is especially well suited to real-time web services.

Using Tornado to crawl web pages is more efficient.
Install Tornado from its official website; you will also need the backports.ssl_match_hostname package. The official website is:

http://www.tornadoweb.org/en/stable/

import tornado.httpclient

def fetch(url):
    http_header = {'User-Agent': 'Chrome'}
    http_request = tornado.httpclient.HTTPRequest(
        url=url, method='GET', headers=http_header,
        connect_timeout=200,
        request_timeout=600)  # value lost in the original text; 600 is an example
    http_client = tornado.httpclient.HTTPClient()
    print 'Start downloading data...'
    http_response = http_client.fetch(http_request)
    print 'Finish downloading data...'
    print http_response.code  # HTTP status code
    all_fields = http_response.headers.get_all()
    for field in all_fields:
        print field  # each response header as a (name, value) pair
    print http_response.body

if __name__ == '__main__':
    fetch('http://www.baidu.com')
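
The HTTPClient above is synchronous and fetches one page at a time; the concurrency that makes Tornado attractive comes from AsyncHTTPClient. Below is a minimal sketch using the classic callback API of the older Tornado releases this Python 2 article targets; the URL list is just an example:

import tornado.httpclient
import tornado.ioloop

# Example URLs; replace with the pages you want to crawl.
urls = ['http://www.baidu.com', 'http://www.tornadoweb.org']
pending = [len(urls)]  # mutable counter shared with the callback

def handle_response(response):
    # Invoked by the IOLoop as each fetch completes.
    print response.request.url, response.code
    pending[0] -= 1
    if pending[0] == 0:
        tornado.ioloop.IOLoop.instance().stop()

if __name__ == '__main__':
    client = tornado.httpclient.AsyncHTTPClient()
    for url in urls:
        client.fetch(url, callback=handle_response)
    tornado.ioloop.IOLoop.instance().start()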

Common methods on the response object returned by urllib2.urlopen() (a short example follows this list):

(1) info(): returns the header information of the page

(2) getcode(): returns the HTTP status code of the response

(3) geturl(): returns the URL that was actually retrieved (useful for detecting redirects)

(4) read(): reads the content of the response body
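
As a quick sketch exercising these four methods (Python 2):

import urllib2

response = urllib2.urlopen('http://www.baidu.com')
print response.info()         # header information
print response.getcode()      # HTTP status code, e.g. 200
print response.geturl()       # URL actually retrieved (after redirects)
print response.read()[:100]   # first 100 bytes of the content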
