1. What is a web crawler
A web crawler is one of the core, foundational technologies behind modern search engines. If the network is a spider web, the crawler is the spider that 'crawls' across it in search of useful information.
2. A crawler that collects proxy servers
This article describes how to implement a Python web crawler that collects proxy servers. The main steps are:
1) Use urllib2 to fetch the web pages that list proxy servers (taking http://www.cnproxy.com/proxy1.html as an example)
2) Use regular expressions to extract the proxy IP information
3) Use multithreading to verify that the proxy IPs actually work (a minimal sketch of this step appears at the end of this section)
1) Crawl the proxy IP list
import urllib2
import re

def get_proxy_list():
    '''Fetch proxies from:
    http://www.cnproxy.com/proxy1.html
    http://www.cnproxy.com/proxy2.html
    http://www.cnproxy.com/proxy3.html
    '''
    # cnproxy obfuscates ports with JavaScript letter codes; map them back to digits
    portdicts = {'z': '3', 'm': '4', 'a': '2', 'l': '9', 'f': '0',
                 'b': '5', 'i': '7', 'w': '6', 'x': '8', 'c': '1'}
    proxylist = []
    p = re.compile(r'''<tr><td>(.+?)<script type=text/javascript>document.write\(":"\+(.+?)\)</SCRIPT></td><td>(.+?)</td><td>.+?</td><td>(.+?)</td></tr>''')
    for i in range(1, 4):
        target = r"http://www.cnproxy.com/proxy%d.html" % i
        req = urllib2.urlopen(target)
        result = req.read()
        match = p.findall(result)
        for row in match:
            ip = row[0]
            port = row[1]
            # decode the obfuscated port, e.g. "z+m" -> "34"
            port = map(lambda x: portdicts[x], port.split('+'))
            port = ''.join(port)
            agent = row[2]
            addr = row[3].decode("cp936").encode("utf-8")
            proxylist.append([ip, port, agent, addr])
    return proxylist
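Calling the function is straightforward; the small snippet below is only an illustration and is not part of the original article:

if __name__ == '__main__':
    proxies = get_proxy_list()
    print "fetched %d proxies" % len(proxies)
    # show the first few entries: [ip, port, agent, address]
    for ip, port, agent, addr in proxies[:5]:
        print ip, port, agent, addr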
First, the urllib2 module fetches each page; then the re module matches the proxy server entries; finally, every captured proxy record is appended to proxylist and returned.
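Step 3, verifying the proxies with multiple threads, is not shown above. A minimal sketch of one way to do it, assuming a Python 2 environment to match the code above (the check_proxy/verify_proxies helpers, the test URL and the 5-second timeout are illustrative choices, not the article's own implementation):

import threading
import urllib2

def check_proxy(proxy, good_proxies, lock):
    # proxy is an [ip, port, agent, addr] list as returned by get_proxy_list()
    ip, port = proxy[0], proxy[1]
    try:
        opener = urllib2.build_opener(
            urllib2.ProxyHandler({'http': 'http://%s:%s' % (ip, port)}))
        # fetch a stable page through the proxy; any failure or timeout discards the proxy
        opener.open('http://www.baidu.com', timeout=5).read()
        with lock:
            good_proxies.append(proxy)
    except Exception:
        pass

def verify_proxies(proxylist):
    good_proxies = []
    lock = threading.Lock()
    threads = [threading.Thread(target=check_proxy, args=(p, good_proxies, lock))
               for p in proxylist]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return good_proxies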