Automatically fetching a proxy list in Python and crawling a website through the proxy


This script implements the following:

1. Automatically fetches the latest working proxy information from a proxy site: IP address, port, and protocol type (the site limits the number of API calls per minute).

2. Automatically plugs the proxy information in and crawls a target website through that proxy.

Note: the proxy API domain in the code below is deliberately obfuscated with "X" characters.
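For context, the script expects the API to reply with a JSON object whose 'protocol', 'ip', and 'port' fields it combines into a proxy address. Those field names are the ones the code below reads; the sample payload in this minimal Python 2 sketch is made up:

# Sketch only: parse a proxy-API reply and build a proxy address.
# The payload here is a hypothetical example; 'protocol', 'ip' and 'port'
# are the fields the script below actually reads.
import json

sample_reply = '{"protocol": "http", "ip": "1.2.3.4", "port": "3128"}'
info = json.loads(sample_reply)
proxy_addr = info['ip'] + ':' + str(info['port'])
print info['protocol'], proxy_addr   # -> http 1.2.3.4:3128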

 

The code is as follows:


# -*- coding: utf8 -*-
'''
Fetch a fresh proxy from a public API, then crawl a site through it.
Python 2 (urllib2/cookielib).
'''

import urllib2
import cookielib
import time
import json
 
class Spide:
    def __init__(self, proxy_ip, proxy_type, proxy_port, use_proxy=False):
        print 'using the proxy info :', proxy_ip
        self.proxy = urllib2.ProxyHandler({proxy_type: proxy_ip + ":" + proxy_port})
        self.usercode = ""
        self.userid = ""
        self.cj = cookielib.LWPCookieJar()

        # Keep the cookie jar in both cases; add the proxy handler when asked to.
        if use_proxy:
            self.opener = urllib2.build_opener(self.proxy, urllib2.HTTPCookieProcessor(self.cj))
        else:
            self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
        urllib2.install_opener(self.opener)
 
    # Fetch one proxy (protocol, ip, port) from the API.
    def get_proxy(self):
        try:
            reqRequest_proxy = urllib2.Request('http://gXiXmXmXeXpXrXoXxXy.com/api/getProxy')
            reqRequest_proxy.add_header('Accept', '*/*')
            reqRequest_proxy.add_header('Accept-Language', 'zh-CN,zh;q=0.8')
            reqRequest_proxy.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36')
            reqRequest_proxy.add_header('Content-Type', 'application/x-www-form-urlencoded')

            proxy_info = urllib2.urlopen(reqRequest_proxy).read()
            print proxy_info
            # The reply is a JSON object with 'protocol', 'ip' and 'port' fields.
            return json.loads(proxy_info)
        except Exception, e:
            print 'proxy has a problem:', e
            return None
 
    # Crawl the target site (through the installed opener, proxied or not).
    def chrome(self):
        try:
            reqRequest = urllib2.Request('http://www.503error.com')
            reqRequest.add_header('Accept', '*/*')
            reqRequest.add_header('Accept-Language', 'zh-CN,zh;q=0.8')
            reqRequest.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36')
            reqRequest.add_header('Content-Type', 'application/x-www-form-urlencoded')
            content = urllib2.urlopen(reqRequest).read()
            print 'done'
        except Exception, e:
            print 'oppps:', e
 
 
 
if __name__ == "__main__":

    for count in range(100):
        print '################################:', count
        print 'Getting the new proxy info:'
        # A throwaway instance (no proxy) just to call the API.
        test = Spide(proxy_ip='test', proxy_type='http', proxy_port='3128', use_proxy=False)
        proxy_list = test.get_proxy()
        if not proxy_list:
            time.sleep(5)  # back off and retry; the API limits calls per minute
            continue

        print 'start to chrome'
        spide1 = Spide(proxy_ip=proxy_list['ip'], proxy_type=proxy_list['protocol'],
                       proxy_port=str(proxy_list['port']), use_proxy=True)
        spide1.chrome()
        time.sleep(5)
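The listing above is Python 2 (urllib2, cookielib, print statements). On Python 3 the same flow maps onto urllib.request and http.cookiejar; the following is a minimal sketch of the equivalent setup, carrying over the obfuscated endpoint and the JSON field names from the script above:

# Minimal Python 3 sketch of the same flow: urllib2 -> urllib.request,
# cookielib -> http.cookiejar. Endpoint and JSON field names are carried
# over from the Python 2 script above.
import json
import time
import urllib.request
from http.cookiejar import LWPCookieJar

def get_proxy():
    # Ask the (obfuscated) API for one proxy; reply has protocol/ip/port.
    req = urllib.request.Request('http://gXiXmXmXeXpXrXoXxXy.com/api/getProxy',
                                 headers={'User-Agent': 'Mozilla/5.0'})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode('utf-8'))

def fetch_through_proxy(info, url='http://www.503error.com'):
    # Route the request through the proxy while keeping a cookie jar.
    proxy = urllib.request.ProxyHandler(
        {info['protocol']: info['ip'] + ':' + str(info['port'])})
    cookies = urllib.request.HTTPCookieProcessor(LWPCookieJar())
    opener = urllib.request.build_opener(proxy, cookies)
    return opener.open(url).read()

if __name__ == '__main__':
    info = get_proxy()
    print(fetch_through_proxy(info)[:200])
    time.sleep(5)  # the proxy API limits calls per minute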
 
