Objective
Take the free proxy IP site http://www.xicidaili.com/nn/ as an example: when using it, I found that many of the listed IPs do not actually work. So I wrote a Python script that detects which of the proxy IPs can actually be used.
#encoding=utf8
import urllib2
import urllib
import socket
from bs4 import BeautifulSoup

user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0'
header = {}
header['User-Agent'] = user_agent

def getProxyIp():
    """Fetch all proxy IP addresses from the listing pages."""
    proxy = []
    for i in range(1, 2):  # page 1 only; raise the upper bound to crawl more pages
        try:
            url = 'http://www.xicidaili.com/nn/' + str(i)
            req = urllib2.Request(url, headers=header)
            res = urllib2.urlopen(req).read()
            soup = BeautifulSoup(res, "html.parser")
            ips = soup.findAll('tr')
            for x in range(1, len(ips)):  # skip the table header row
                tds = ips[x].findAll("td")
                # column 1 holds the IP, column 2 the port
                ip_temp = tds[1].contents[0] + "\t" + tds[2].contents[0]
                proxy.append(ip_temp)
        except:
            continue  # skip pages that fail to download or parse
    return proxy

def validateIp(proxy):
    """Verify which of the collected proxy IPs are usable."""
    url = "http://ip.chinaz.com/getip.aspx"
    f = open("E:\ip.txt", "w")  # output file for working proxies
    socket.setdefaulttimeout(3)  # drop proxies that take longer than 3 s
    for i in range(0, len(proxy)):
        try:
            ip = proxy[i].strip().split("\t")
            proxy_host = "http://" + ip[0] + ":" + ip[1]
            proxy_temp = {"http": proxy_host}
            res = urllib.urlopen(url, proxies=proxy_temp).read()
            f.write(proxy[i] + '\n')  # record proxies that responded
            print proxy[i]
        except Exception:
            continue
    f.close()

if __name__ == '__main__':
    proxy = getProxyIp()
    validateIp(proxy)
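The script above targets Python 2 (urllib2 and the print statement). As a rough sketch of the same validation step using the third-party requests library, which handles proxies and timeouts in one call; requests, the output path ip.txt, and the httpbin.org test URL are my assumptions, not part of the original script:

import requests

def validate_ip(proxy_list, out_path="ip.txt"):
    # httpbin.org/ip simply echoes the caller's IP; it stands in here for
    # the ip.chinaz.com endpoint used in the original script.
    test_url = "http://httpbin.org/ip"
    with open(out_path, "w") as f:
        for entry in proxy_list:
            host, port = entry.strip().split("\t")
            proxies = {"http": "http://" + host + ":" + port}
            try:
                # 3-second timeout, mirroring socket.setdefaulttimeout(3)
                requests.get(test_url, proxies=proxies, timeout=3)
            except requests.RequestException:
                continue  # proxy failed or timed out; skip it
            f.write(entry + "\n")
            print(entry)

Note that a proxy that connects but returns a wrong response would still pass this check; comparing the IP echoed by the test URL against the proxy's own IP would be a stricter test.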
Summary
The script only crawls the IP addresses on the first page; you can crawl a few more pages if you need to, as sketched below. Since the site is updated constantly, it is recommended to crawl only the first few pages.
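For example, getProxyIp can take a page count, since the page number is just the trailing path segment of the listing URL. The pages parameter is my addition; the body otherwise mirrors the original function:

def getProxyIp(pages=3):
    """Crawl the first `pages` pages of the proxy listing."""
    proxy = []
    for i in range(1, pages + 1):
        try:
            # page N of the listing lives at http://www.xicidaili.com/nn/N
            url = 'http://www.xicidaili.com/nn/' + str(i)
            req = urllib2.Request(url, headers=header)
            res = urllib2.urlopen(req).read()
            soup = BeautifulSoup(res, "html.parser")
            ips = soup.findAll('tr')
            for x in range(1, len(ips)):
                tds = ips[x].findAll("td")
                proxy.append(tds[1].contents[0] + "\t" + tds[2].contents[0])
        except:
            continue
    return proxy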