Using Python to Implement a Web Crawler

Source: Internet
Author: User
Tags: python, web crawler

1. What is a web crawler

The web crawler is a core, fundamental technology behind modern search engines. The network is like a spider's web, and the crawler is the spider that "crawls" across it in search of useful information.

2. A web crawler for proxy servers

This article describes how to implement a Python web crawler that collects proxy servers. The main steps are:

1) Use urllib2 to fetch the web pages that publish proxy server information (taking http://www.cnproxy.com/proxy1.html as an example)

2) Use regular expressions to extract the proxy IP information

3) Use multithreading to verify that the proxy IPs are actually usable (a sketch of this step follows the proxy-list code below)

1) Crawling the proxy IP list

import re
import urllib2

def get_proxy_list():
    """Crawl the proxy pages and return a list of [ip, port, agent, addr] entries.

    Pages crawled:
        http://www.cnproxy.com/proxy1.html
        http://www.cnproxy.com/proxy2.html
        http://www.cnproxy.com/proxy3.html
    """
    # cnproxy hides the port behind a JavaScript letter code; this table maps each letter to a digit
    portdicts = {'z': '3', 'm': '4', 'a': '2', 'l': '9', 'f': '0',
                 'b': '5', 'i': '7', 'w': '6', 'x': '8', 'c': '1'}
    proxylist = []
    # capture groups: IP, letter-coded port, proxy type, (skipped column), location
    p = re.compile(r'<tr><td>(.+?)<script type=text/javascript>document.write\(":"\+(.+?)\)</SCRIPT></td><td>(.+?)</td><td>.+?</td><td>(.+?)</td></tr>')
    for i in range(1, 4):
        target = r'http://www.cnproxy.com/proxy%d.html' % i
        req = urllib2.urlopen(target)
        result = req.read()
        match = p.findall(result)
        for row in match:
            ip = row[0]
            port = row[1]
            # decode the letter-coded port, e.g. 'x+f' -> '80'
            port = map(lambda x: portdicts[x], port.split('+'))
            port = ''.join(port)
            agent = row[2]
            addr = row[3].decode("cp936").encode("utf-8")
            proxylist.append([ip, port, agent, addr])
    return proxylist

First, the urllib2 module fetches each page, then the re module matches the proxy server entries. Because cnproxy writes the port with a small JavaScript snippet using letter codes, the portdicts table translates each letter back into a digit (for example, 'x+f' decodes to port 80). Every captured record [ip, port, agent, addr] is appended to proxylist, which is then returned.
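The third step, verifying the proxies with multiple threads, is only mentioned in the step list above and is not shown in the original code. Below is a minimal sketch of one way to do it with the threading and Queue modules; the function names, the test URL http://www.baidu.com, the timeout, and the thread count are assumptions made for illustration, not details taken from the article.

import threading
import urllib2
from Queue import Queue

def check_proxy(proxy_queue, good_proxies, lock):
    # Worker thread: pop proxies off the queue and keep the ones that respond in time.
    while True:
        try:
            ip, port, agent, addr = proxy_queue.get_nowait()
        except Exception:
            return  # queue is empty, this worker is done
        proxy = '%s:%s' % (ip, port)
        opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
        try:
            # the test URL and timeout are arbitrary choices for this sketch
            opener.open('http://www.baidu.com', timeout=5)
            with lock:
                good_proxies.append([ip, port, agent, addr])
        except Exception:
            pass  # unreachable or too slow: discard

def verify_proxy_list(proxylist, thread_count=10):
    # Validate the crawled proxies with a pool of worker threads.
    proxy_queue = Queue()
    for item in proxylist:
        proxy_queue.put(item)
    good_proxies = []
    lock = threading.Lock()
    threads = [threading.Thread(target=check_proxy,
                                args=(proxy_queue, good_proxies, lock))
               for _ in range(thread_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return good_proxies

# usage: crawl first, then keep only the proxies that actually respond
# good = verify_proxy_list(get_proxy_list())

The queue lets the worker threads share the crawled list safely, and the lock protects the shared result list while each worker records the proxies that answered within the timeout.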
