Reprint please note the source: @xiaowuyi http://www.cnblogs.com/xiaowuyi
6.1 The simplest crawler
A web crawler is a program that automatically fetches web pages, and it is an important component of a search engine. Python's urllib, urllib2, and related modules make this easy to implement. The following example shows how to download the Baidu homepage. The code is as follows:
import urllib2
page = urllib2.urlopen("http://www.baidu.com")
print page.read()
6.2 Submitting form data
(1) Submitting data with the GET method
Submitting a form with the GET method encodes the form data into the URL: a question mark is appended to the requested page, followed by the form's elements. For example, searching Baidu for "马伊琍" (Ma Yili) gives the URL http://www.baidu.com/s?wd=%E9%A9%AC%E4%BC%8A%E7%90%8D&pn=100&rn=20&ie=utf-8&usm=4&rsv_page=1. Everything after the ? is form data: wd=%E9%A9%AC%E4%BC%8A%E7%90%8D is the URL-encoded search term "马伊琍"; pn indicates that the results start from the page containing the 100th entry (I tried this several times: with 100 it displays from that page, but with 10 it displays from page 1); rn=20 means 20 entries are displayed per page; ie=utf-8 is the encoding format; usm=4 I do not understand (I tried 1, 2, and 3 but saw no change); rsv_page=1 is the page number. If you want to download the above page, you can simply fetch that URL. Code example:
import urllib
import urllib2

keyword = urllib.quote('马伊琍')
page = urllib2.urlopen("http://www.baidu.com/s?wd=" + keyword + "&pn=100&rn=20&ie=utf-8&usm=4&rsv_page=1")
print page.read()
(2) Submitting data with the POST method
With GET, the data is appended to the URL, which limits how much data can be sent. If you need to exchange a large amount of data, the POST method is a better choice. Take my earlier blog post "Python simulated 163 login to get mail list" as an example; the full code is not listed here. For details, see http://www.cnblogs.com/xiaowuyi/archive/2012/05/21/2511428.html.
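Since the full login code is not repeated here, below is a minimal POST sketch with urllib2. The URL and form fields are placeholders for illustration, not the actual 163 login parameters:

import urllib
import urllib2

# Hypothetical login endpoint and form fields, for illustration only
postdata = urllib.urlencode({'username': 'xiaowuyi', 'password': '123456'})
# Passing a data argument makes urllib2 send a POST request instead of a GET
page = urllib2.urlopen("http://www.example.com/login", postdata)
print page.read()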
6.3 An introduction to urllib, urllib2, httplib, and mechanize
6.3.1 The urllib module (reference: http://my.oschina.net/duhaizhang/blog/68893)
The urllib module provides interfaces that allow us to read WWW and FTP data just like accessing local files. The two most important functions in the module are urlopen () and urlretrieve ().
urllib.urlopen(url[, data[, proxies]]):
This function creates a file-like object that represents the remote URL; you can then operate on this object like a local file to obtain remote data. The url parameter is the path of the remote data, usually a URL; the data parameter is data to submit to the URL by POST; the proxies parameter sets a proxy. urlopen returns a file-like object that provides the following methods:
read(), readline(), readlines(), fileno(), close(): used the same way as on file objects;
info(): returns an httplib.HTTPMessage object representing the header information returned by the remote server;
getcode(): returns the HTTP status code. For an HTTP request, 200 means the request completed successfully and 404 means the URL was not found;
geturl(): returns the requested URL;
#!/usr/bin/env python
# coding=utf-8
import urllib

content = urllib.urlopen("http://www.baidu.com")
print "http header:", content.info()
print "http status:", content.getcode()
print "url:", content.geturl()
print "content:"
for line in content.readlines():
    print line
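The data and proxies parameters of urlopen are not used above; here is a minimal sketch of both, where the form field and the proxy address are assumptions for illustration:

import urllib

# POST: the data argument must already be URL-encoded (hypothetical form field)
params = urllib.urlencode({'key': 'value'})
page = urllib.urlopen("http://www.example.com/form", params)

# Fetch a page through an HTTP proxy (assumed proxy address)
page = urllib.urlopen("http://www.baidu.com", proxies={'http': 'http://127.0.0.1:8087'})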
urllib.urlretrieve(url[, filename[, reporthook[, data]]]):
The urlretrieve method downloads remote data directly to a local file. The filename parameter specifies the local save path (if omitted, urllib generates a temporary file to hold the data). The reporthook parameter is a callback function, triggered when the connection to the server is established and each time a data block is received (that is, the callback is invoked once per downloaded block). We can use this callback to display the current download progress or to limit the download speed; the example below shows the download progress. The data parameter is data POSTed to the server. The method returns a two-element tuple (filename, headers): filename is the local save path, and headers is the server's response header.
#!/usr/bin/env python
# coding: utf-8
"""Download a file and display the download progress."""
import urllib

def downCall(count, size, total_filesize):
    """count: number of blocks downloaded; size: block size; total_filesize: total file size."""
    per = 100.0 * count * size / total_filesize
    if per > 100:
        per = 100
    print "Already download %d KB (%.2f" % (count * size / 1024, per) + "%)"

url = "http://www.research.rutgers.edu/~rohanf/lp133"
localfilepath = r"C:\Users\Administrator\Desktop\downloadparts"
urllib.urlretrieve(url, localfilepath, downCall)
urllib also provides helper methods for encoding and decoding URLs. A URL may not contain certain special characters, and some characters have special purposes. For example, when data is submitted with GET, strings of the form key=value are appended to the URL, so '=' is not allowed inside a value and must be encoded; when the server receives these parameters, it decodes them to restore the original data. These helper methods are useful here (a short demonstration follows the list):
urllib.quote(string[, safe]): encodes the string. The safe parameter specifies characters that do not need encoding;
urllib.unquote(string): decodes the string;
urllib.quote_plus(string[, safe]): like urllib.quote, but replaces ' ' with '+', whereas quote encodes ' ' as '%20';
urllib.unquote_plus(string): decodes the string;
urllib.urlencode(query[, doseq]): converts a dict, or a list of two-element tuples, into URL parameters. For example, the dictionary {'name': 'dark-bull', 'age': 200} is converted to "name=dark-bull&age=200";
urllib.pathname2url(path): converts a local path into a URL path;
urllib.url2pathname(path): converts a URL path into a local path;
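A short demonstration of these helpers, runnable directly in the interpreter:

import urllib

print urllib.quote('ma yi li')        # ma%20yi%20li
print urllib.quote_plus('ma yi li')   # ma+yi+li
print urllib.unquote('ma%20yi%20li')  # ma yi li
# Pair order may vary, since Python 2 dictionaries are unordered:
print urllib.urlencode({'name': 'dark-bull', 'age': 200})  # name=dark-bull&age=200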
6.3.2 The urllib2 module (reference: http://hankjin.blog.163.com/blog/static/3373193720105140583594)
There are three main ways to access web pages with Python: urllib, urllib2, and httplib.
urllib is relatively simple and its functionality is relatively weak; httplib is simple and powerful, but does not seem to support sessions.
(1) The simplest page access:
res = urllib2.urlopen(url)
print res.read()
(2) Adding data for GET or POST:
data = {"name": "hank", "passwd": "hjz"}
urllib2.urlopen(url, urllib.urlencode(data))
(3) Adding an HTTP header:
header = {"User-Agent": "Mozilla-Firefox5.0"}
req = urllib2.Request(url, urllib.urlencode(data), header)
urllib2.urlopen(req)
Using an opener and a handler:
opener = urllib2.build_opener(handler)   # handler built as in (4)-(6) below
urllib2.install_opener(opener)
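A minimal sketch of how opener and handler fit together (URL assumed): an opener can be used directly through its open method, or installed globally so that plain urllib2.urlopen calls go through it:

import urllib2

opener = urllib2.build_opener()            # no extra handlers: default behaviour
res = opener.open("http://www.baidu.com")  # use this opener for a single call
urllib2.install_opener(opener)             # or install it globally...
res = urllib2.urlopen("http://www.baidu.com")  # ...so urlopen uses it too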
(4) Adding a session (cookies):
cj = cookielib.CookieJar()
cjhandler = urllib2.HTTPCookieProcessor(cj)
opener = urllib2.build_opener(cjhandler)
urllib2.install_opener(opener)
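Once the cookie-aware opener is installed, every urllib2.urlopen call shares the same CookieJar, so the session persists across requests. A small sketch (URL assumed):

import cookielib
import urllib2

cj = cookielib.CookieJar()
urllib2.install_opener(urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)))
urllib2.urlopen("http://www.163.com/")
for cookie in cj:                  # cookies the server set on the first visit
    print cookie.name, cookie.value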
(5) Adding basic authentication:
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
top_level_url = "http://www.163.com/"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
(6) Using a proxy:
proxy_support = urllib2.ProxyHandler({"http": "http://1.2.3.4:3128/"})
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)
(7) Setting a timeout:
import socket
socket.setdefaulttimeout(5)
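socket.setdefaulttimeout applies to all sockets created afterwards. Since Python 2.6, urllib2.urlopen also accepts a per-call timeout parameter; a minimal sketch (URL assumed):

import socket
import urllib2

try:
    res = urllib2.urlopen("http://www.baidu.com", timeout=5)  # per-call timeout in seconds
except (urllib2.URLError, socket.timeout):
    print "request timed out"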
6.3.3 The httplib module (source: http://hi.baidu.com/avengert/item/be5daec8517b12ddee183b81)
httplib is a client-side implementation of the HTTP protocol in Python that can be used to interact with HTTP servers. httplib does not contain much and is relatively simple. Here is a simple example that uses httplib to fetch the HTML of the Google homepage:
# coding=gbk
import httplib

conn = httplib.HTTPConnection("www.google.cn")
conn.request('GET', '/')
print conn.getresponse().read()
conn.close()
The following describes the common types and methods provided by httplib.
httplib.HTTPConnection(host[, port[, strict[, timeout]]])
The constructor of the HTTPConnection class, which represents one interaction with the server, i.e., one request/response. The host parameter is the server host, e.g. www.csdn.net; port is the port number, defaulting to 80; the strict parameter defaults to False and determines whether a BadStatusLine exception is raised when the status line returned by the server cannot be parsed (a typical status line is HTTP/1.0 200 OK); the optional timeout parameter is the timeout period.
Methods provided by HTTPConnection:
HTTPConnection.request(method, url[, body[, headers]])
Calling the request method sends a request to the server. method is the request method, commonly GET or POST; url is the URL of the requested resource; body is the data submitted to the server and must be a string (if method is POST, body can be understood as the data from an HTML form); headers are the HTTP headers of the request.
HTTPConnection.getresponse()
Gets the HTTP response. The returned object is an instance of HTTPResponse; HTTPResponse is described below.
HTTPConnection.connect()
Connects to the HTTP server.
HTTPConnection.close()
Closes the connection to the server.
HTTPConnection.set_debuglevel(level)
Sets the debug level. The level parameter defaults to 0, meaning no debugging information is output.
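The body parameter of request was described above but is not exercised by the GET examples in this section; as a minimal sketch of a POST with httplib (host, path, and form fields are hypothetical assumptions):

# coding=utf-8
import httplib
import urllib

params = urllib.urlencode({'name': 'xiaowuyi', 'passwd': '123456'})  # assumed form fields
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
conn = httplib.HTTPConnection('www.example.com')
conn.request('POST', '/login', params, headers)  # body goes in the third argument
res = conn.getresponse()
print res.status, res.reason
conn.close()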
httplib.HTTPResponse
HTTPResponse represents the server's response to the client's request. It is usually created by calling HTTPConnection.getresponse() and has the following methods and attributes:
HTTPResponse.read([amt])
Gets the response message body. If the request was for an ordinary web page, this method returns the page's HTML. The optional amt parameter reads the specified number of bytes from the response stream.
HTTPResponse.getheader(name[, default])
Gets a response header. name is the header field name; the optional default parameter is returned as the default value if the header field does not exist.
HTTPResponse.getheaders()
Returns all the header information as a list.
HTTPResponse.msg
Gets all the response header information.
HTTPResponse.version
Gets the HTTP protocol version used by the server. 11 means HTTP/1.1; 10 means HTTP/1.0.
HTTPResponse.status
Gets the status code of the response. For example, 200 means the request succeeded.
HTTPResponse.reason
Returns a description of the server's handling of the request, usually "OK".
The following example will help you become familiar with the methods of HTTPResponse:
# coding=gbk
import httplib

conn = httplib.HTTPConnection("www.g.cn", 80, False)
conn.request('GET', '/', headers={"Host": "www.google.cn",
                                  "User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1) Gecko/20090624 Firefox/3.5",
                                  "Accept": "text/plain"})
res = conn.getresponse()
print 'version:', res.version
print 'reason:', res.reason
print 'status:', res.status
print 'msg:', res.msg
print 'headers:', res.getheaders()
# html
# print '\n' + '-' * 50 + '\n'
# print res.read()
conn.close()
The httplib module also defines many constants, for example:
httplib.HTTP_PORT has the value 80, meaning the default port number is 80;
httplib.OK has the value 200, indicating that the request succeeded;
httplib.NOT_FOUND has the value 404, indicating that the requested resource does not exist;
You can use httplib.responses to look up the meaning of a status code, for example:
print httplib.responses[httplib.NOT_FOUND]
6.3.4 The mechanize module
mechanize is not introduced here in full; I wrote a simple example, shown below.
# -*- coding: cp936 -*-
import time, string
import mechanize, urllib
from mechanize import Browser

urlname = urllib.quote('马伊琍')
br = Browser()
br.set_handle_robots(False)   # ignore robots.txt
urlhttp = r'http://www.baidu.com/s?wd=' + urlname + "&pn=10&rn=20&ie=utf-8&usm=4&rsv_page=1"
response = br.open(urlhttp)
filename = 'temp.html'
f = open(filename, 'w')
f.write(response.read())
f.close()
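mechanize can also fill in and submit forms directly, which fits the form-submission topic of section 6.2. A minimal sketch, where the form index and the field name 'wd' are assumptions based on Baidu's search page:

# -*- coding: utf-8 -*-
from mechanize import Browser

br = Browser()
br.set_handle_robots(False)   # ignore robots.txt, as above
br.open('http://www.baidu.com')
br.select_form(nr=0)          # assume the first form on the page is the search form
br['wd'] = 'test'             # fill in the search box (field name assumed)
response = br.submit()
print response.read()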