Scrapy crawler framework uses IP proxy pool

Source: Internet
Author: User
Tags: base64
One, manually update the IP pool


Method One:



1. Add the IP pool to the settings file (settings.py):


ippool = [
    {"ipaddr": "61.129.70.131:8080"},
    {"ipaddr": "61.152.81.193:9100"},
    {"ipaddr": "120.204.85.29:3128"},
    {"ipaddr": "219.228.126.86:8123"},
    {"ipaddr": "61.152.81.193:9100"},
    {"ipaddr": "218.82.33.225:53853"},
    {"ipaddr": "223.167.190.17:42789"}
]


These IPs can be obtained from free proxy sites such as Kuaidaili, 66ip, Youdaili, Xici Proxy, and Goubanjia. If you see a message like "The connection attempt failed because the connected party did not reply correctly after a period of time or the connected host did not respond" or "No connection could be made because the target machine actively refused it", the proxy IP itself is the problem; just swap in another one. In practice, many of the free IPs listed above turn out to be unusable, so it pays to test them first, as in the sketch below.
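The following is a minimal sketch (my own addition, not from the original article) for weeding out dead entries before they go into ippool. It assumes the proxies speak plain HTTP and uses http://httpbin.org/ip purely as a cheap test URL:


import requests

ippool = [
    {"ipaddr": "61.129.70.131:8080"},
    {"ipaddr": "61.152.81.193:9100"},
]

def is_alive(ipaddr, timeout=3):
    # A 200 response means the proxy relayed the request within the timeout
    proxies = {"http": "http://" + ipaddr}
    try:
        return requests.get("http://httpbin.org/ip",
                            proxies=proxies, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

live_pool = [p for p in ippool if is_alive(p["ipaddr"])]
print(live_pool)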


2017-04-16 12:38:11 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://news.sina.com.cn/> (failed 1 times): TCP connection timed out: 10060: The connection attempt failed because the connected party did not reply correctly after a period of time or the connected host did not respond.
this is ip:182.241.58.70:51660
2017-04-16 12:38:32 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://news.sina.com.cn/> (failed 2 times): TCP connection timed out: 10060: The connection attempt failed because the connected party did not reply correctly after a period of time or the connected host did not respond.
this is ip:49.75.59.243:28549
2017-04-16 12:38:33 [scrapy.crawler] INFO: Received SIGINT, shutting down gracefully. Send again to force
2017-04-16 12:38:33 [scrapy.core.engine] INFO: Closing spider (shutdown)
2017-04-16 12:38:50 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-16 12:38:53 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://news.sina.com.cn/> (failed 3 times): TCP connection timed out: 10060: The connection attempt failed because the connected party did not reply correctly after a period of time or the connected host did not respond. .
2017-04-16 12:38:54 [scrapy.core.scraper] ERROR: Error downloading <GET http://news.sina.com.cn/>
Traceback (most recent call last):
  File "f:\software\python36\lib\site-packages\twisted\internet\defer.py", line 1299, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "f:\software\python36\lib\site-packages\twisted\python\failure.py", line 393, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "f:\software\python36\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.TCPTimedOutError: TCP connection timed out: 10060: The connection attempt failed because the connected party did not reply correctly after a period of time or the connected host did not respond. .


The downloader middleware that handles proxy settings in Scrapy is HttpProxyMiddleware; the corresponding class is scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware (older Scrapy releases exposed it under the scrapy.contrib.downloadermiddleware.httpproxy path used in the settings snippets below).
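For orientation, the built-in middleware's behavior can be pictured roughly as follows (a simplified sketch of my own, not Scrapy's actual source): it reads the system proxy settings once, and for every request that does not already carry meta['proxy'] it fills one in based on the URL scheme.


from urllib.request import getproxies

class SketchHttpProxyMiddleware(object):
    def __init__(self):
        # e.g. {'http': 'http://127.0.0.1:8080'}, taken from the environment
        self.proxies = getproxies()

    def process_request(self, request, spider):
        if 'proxy' in request.meta:
            return  # a spider or another middleware already chose a proxy
        scheme = request.url.split(':', 1)[0]
        if scheme in self.proxies:
            request.meta['proxy'] = self.proxies[scheme]


Because the custom middleware below sets meta['proxy'] itself, the built-in one is disabled (set to None) in the settings later on.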


2. Modify the middleware file middlewares.py


# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

import random

from scrapy import signals
from youx.settings import ippool


class MyproxiesSpiderMiddleware(object):

    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        # Pick a random entry from the pool and attach it to the request
        thisip = random.choice(ippool)
        print("this is ip: " + thisip["ipaddr"])
        request.meta["proxy"] = "http://" + thisip["ipaddr"]


3. Set DOWNLOADER_MIDDLEWARES in settings.py


DOWNLOADER_MIDDLEWARES = {
    # 'youx.middlewares.MyCustomDownloaderMiddleware': 543,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': None,
    'youx.middlewares.MyproxiesSpiderMiddleware': 125,
}
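A quick way to confirm that requests really go out through the pool is a throwaway spider that fetches http://httpbin.org/ip, whose response body echoes the IP the server sees. This spider is my own addition for illustration; the name is arbitrary and it assumes it lives inside the same project:


import scrapy

class ProxyCheckSpider(scrapy.Spider):
    name = "proxycheck"
    start_urls = ["http://httpbin.org/ip"]

    def parse(self, response):
        # With the middleware enabled this should print the proxy's address,
        # not your own outgoing IP
        self.logger.info("origin seen by the server: %s", response.text)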


Method Two:



middlewares.py:


import base64
import random

from scrapy import signals

PROXIES = [
    {'ip_port': '61.160.233.8', 'user_pass': ''},
    {'ip_port': '125.93.149.186', 'user_pass': ''},
    {'ip_port': '58.38.86.181', 'user_pass': ''},
    {'ip_port': '119.142.86.110', 'user_pass': ''},
    {'ip_port': '124.161.16.89', 'user_pass': ''},
    {'ip_port': '61.160.233.8', 'user_pass': ''},
    {'ip_port': '101.94.131.237', 'user_pass': ''},
    {'ip_port': '219.157.162.97', 'user_pass': ''},
    {'ip_port': '61.152.89.18', 'user_pass': ''},
    {'ip_port': '139.224.132.192', 'user_pass': ''}
]


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass']:
            # user_pass is expected as "user:password"; an empty string means no auth
            encoded_user_pass = base64.b64encode(proxy['user_pass'].encode()).decode()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
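All of the sample entries above have an empty user_pass, so the Proxy-Authorization branch never fires. For an authenticated proxy the entry would carry credentials in "user:password" form; the snippet below (with made-up credentials) shows the header value that branch produces:


import base64

# Hypothetical entry: {'ip_port': '10.0.0.5:3128', 'user_pass': 'alice:secret'}
user_pass = 'alice:secret'
print('Proxy-Authorization: Basic ' + base64.b64encode(user_pass.encode()).decode())
# prints: Proxy-Authorization: Basic YWxpY2U6c2VjcmV0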


settings.py:


DOWNLOADER_MIDDLEWARES = {
    # 'youx.middlewares.MyCustomDownloaderMiddleware': 543,
    'youx.middlewares.ProxyMiddleware': 125,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': None,
}


Original link: http://www.cnblogs.com/rwxwsblog/p/4575894.html


Two, automatically update the IP pool



Here we write a class in proxies.py that fetches proxy IPs automatically; running it saves the harvested IPs to a txt file:


# *-* coding:utf-8 *-*
import random
from multiprocessing import Process, Queue

import requests
from bs4 import BeautifulSoup  # the 'lxml' parser used below also requires the lxml package

class Proxies(object):
    """docstring for Proxies"""

    def __init__(self, page=3):
        self.proxies = []
        self.verify_pro = []
        self.page = page
        self.headers = {
            'Accept': '*/*',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36',
            'Accept-Encoding': 'gzip, deflate, sdch',
            'Accept-Language': 'zh-CN,zh;q=0.8'
        }
        self.get_proxies()
        self.get_proxies_nn()

    def get_proxies(self):
        # Scrape a few random pages of the ordinary ("nt") proxy list
        page = random.randint(1, 10)
        page_stop = page + self.page
        while page < page_stop:
            url = 'http://www.xicidaili.com/nt/%d' % page
            html = requests.get(url, headers=self.headers).content
            soup = BeautifulSoup(html, 'lxml')
            ip_list = soup.find(id='ip_list')
            for odd in ip_list.find_all(class_='odd'):
                protocol = odd.find_all('td')[5].get_text().lower() + '://'
                self.proxies.append(protocol + ':'.join([x.get_text() for x in odd.find_all('td')[1:3]]))
            page += 1

    def get_proxies_nn(self):
        # Same as above, but for the high-anonymity ("nn") proxy list
        page = random.randint(1, 10)
        page_stop = page + self.page
        while page < page_stop:
            url = 'http://www.xicidaili.com/nn/%d' % page
            html = requests.get(url, headers=self.headers).content
            soup = BeautifulSoup(html, 'lxml')
            ip_list = soup.find(id='ip_list')
            for odd in ip_list.find_all(class_='odd'):
                protocol = odd.find_all('td')[5].get_text().lower() + '://'
                self.proxies.append(protocol + ':'.join([x.get_text() for x in odd.find_all('td')[1:3]]))
            page += 1

    def verify_proxies(self):
        # Unverified proxies
        old_queue = Queue()
        # Verified proxies
        new_queue = Queue()
        print('verify proxy.....')
        works = []
        for _ in range(15):
            works.append(Process(target=self.verify_one_proxy, args=(old_queue, new_queue)))
        for work in works:
            work.start()
        for proxy in self.proxies:
            old_queue.put(proxy)
        for work in works:
            old_queue.put(0)
        for work in works:
            work.join()
        self.proxies = []
        while 1:
            try:
                self.proxies.append(new_queue.get(timeout=1))
            except:
                break
        print('verify_proxies done!')

    def verify_one_proxy(self, old_queue, new_queue):
        # Each worker pulls proxies until it hits the 0 sentinel
        while 1:
            proxy = old_queue.get()
            if proxy == 0:
                break
            protocol = 'https' if 'https' in proxy else 'http'
            proxies = {protocol: proxy}
            try:
                if requests.get('http://www.baidu.com', proxies=proxies, timeout=2).status_code == 200:
                    print('success %s' % proxy)
                    new_queue.put(proxy)
            except:
                print('fail %s' % proxy)


if __name__ == '__main__':
    a = Proxies()
    a.verify_proxies()
    print(a.proxies)
    proxie = a.proxies
    with open('proxies.txt', 'a') as f:
        for proxy in proxie:
            f.write(proxy + '\n')


The IPs will be saved to the proxies.txt file.
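Each line of proxies.txt is a complete proxy URL assembled by the scraper (protocol, address, port). The addresses below are invented, but they show the format the middleware in the next step expects:


http://61.135.217.7:80
http://118.114.77.47:8080
https://120.26.206.178:8888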



The modified proxy middleware file middlewares.py looks like this:


import random
import time

import scrapy


class ProxyMiddleWare(object):
    """docstring for ProxyMiddleWare"""

    def process_request(self, request, spider):
        '''Add a proxy to the request object'''
        proxy = self.get_random_proxy()
        print("this is request ip:" + proxy)
        request.meta['proxy'] = proxy

    def process_response(self, request, response, spider):
        '''Handle the returned response'''
        # If the response status is not 200, pick a new proxy and reschedule the request
        if response.status != 200:
            proxy = self.get_random_proxy()
            print("this is response ip:" + proxy)
            # Attach the new proxy and return the request so Scrapy downloads it again
            request.meta['proxy'] = proxy
            return request
        return response

    def get_random_proxy(self):
        '''Randomly read a proxy from the file'''
        while 1:
            with open('G:\\Scrapy_work\\myproxies\\myproxies\\proxies.txt', 'r') as f:
                proxies = f.readlines()
            if proxies:
                break
            else:
                # Wait for proxies.py to (re)fill the file
                time.sleep(1)
        proxy = random.choice(proxies).strip()
        return proxy


Then modify the settings file:


DOWNLOADER_MIDDLEWARES = {
    # 'youx.middlewares.MyCustomDownloaderMiddleware': 543,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': None,
    'youx.middlewares.ProxyMiddleWare': 125,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': None
}
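The "automatic" part of this method comes from rerunning proxies.py so that proxies.txt stays fresh while the spider only ever reads the file. One simple way to do that is a small refresh loop, sketched below as my own addition (a cron job or Task Scheduler entry would work just as well); the 30-minute interval is an arbitrary choice:


# refresh_proxies.py - assumes it sits next to proxies.py so the Proxies class is importable
import time

from proxies import Proxies

if __name__ == '__main__':
    while True:
        p = Proxies()
        p.verify_proxies()
        # Overwrite rather than append, so dead proxies drop out of the file
        with open('proxies.txt', 'w') as f:
            for proxy in p.proxies:
                f.write(proxy + '\n')
        time.sleep(30 * 60)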


Original link: http://blog.csdn.net/u011781521/article/details/70194744

