Chapter 2: Scrapy Crawlers and Breaking Through Anti-Crawler Restrictions

7-1 Crawler vs. anti-crawler: processes and strategies

I. Basic concepts of crawlers and anti-crawlers

II. The purpose of anti-crawler measures

III. The back-and-forth between crawlers and anti-crawler defenses

7-2 Scrapy architecture and source code analysis

Schematic:

When I first came into contact with Scrapy, I studied the classic architecture diagram (shown in the original post). There is now a newer, more intuitive schematic: the Engine sits at the center, pulling Requests from the Scheduler, sending them through the Downloader Middlewares to the Downloader, passing the Responses through the Spider Middlewares to the Spiders, and routing the Items they yield into the Item Pipeline.

After watching the source-code analysis in the video, I couldn't follow it at all. It only started to make sense later, after working through the project exercises.

7-3 Introduction to Request and Response

See the Scrapy documentation for the relevant details: http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html

After a simulated login, Scrapy passes the cookies along automatically on subsequent Requests (the cookies middleware keeps the session for you), so we do not need to add them by hand.

7-4 ~ 7-5 Using a downloader middleware to randomly replace the User-Agent

This is a template and can be used directly later.

# middlewares.py
from fake_useragent import UserAgent  # provides a large pool of random User-Agent strings

class RandomUserAgentMiddleware(object):
    def __init__(self, crawler):
        super(RandomUserAgentMiddleware, self).__init__()
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get('RANDOM_UA_TYPE', 'random')  # read the RANDOM_UA_TYPE value from settings

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        def get_ua():
            '''Get a random UA according to the configured type (random, firefox, ...)'''
            return getattr(self.ua, self.ua_type)

        user_agent_random = get_ua()
        request.headers.setdefault('User-Agent', user_agent_random)  # the User-Agent changes on every request
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'lagou.middlewares.RandomUserAgentMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # disable Scrapy's built-in UserAgentMiddleware, otherwise it would override ours
}
RANDOM_UA_TYPE = 'random'
7-6 ~ 7-8 Implementing an IP proxy pool in Scrapy

This is a template and can be used directly later.

# middlewares.py
class RandomProxyMiddleware(object):
    '''Dynamic IP proxy'''
    def process_request(self, request, spider):
        get_ip = GetIP()  # GetIP fetches a random proxy address of the form 'http://ip:port' (defined elsewhere)
        request.meta["proxy"] = get_ip
        # Example with a hard-coded proxy:
        # request.meta["proxy"] = 'http://110.73.54.0:8123'


# settings.py
DOWNLOADER_MIDDLEWARES = {
    'lagou.middlewares.RandomProxyMiddleware': 542,
    'lagou.middlewares.RandomUserAgentMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # disable the built-in UserAgentMiddleware, otherwise it overrides ours
}

1. Retrieving a random record in SQL: here we randomly fetch one record (an IP address and a port) from the proxy IP table.

select ip, port from proxy_ip
order by rand()
limit 1

2. Using the XPath selector:

Scrapy's Selector can also be used standalone. The code is as follows:

from scrapy.selector import Selector
import requests

html = requests.get(url)
selector = Selector(text=html.text)  # use a lowercase name to avoid shadowing the Selector class
selector.xpath('...')  # pass any XPath expression here

3. if __name__ == "__main__"

Without this guard, the code below would also run whenever the module is imported; with it, the code runs only when the file is executed directly.

if __name__ == "__main__":
    get_ip = GetIP()
    get_ip.get_random_ip()
7-9 CAPTCHA recognition via an online human-coding platform

Common CAPTCHA-recognition approaches: OCR-based recognition, online human-coding platforms, and training your own machine-learning model.

7-10 Disabling cookies, automatic speed limiting (AutoThrottle), and custom spider settings

If you do not need cookies, disable them so the target site cannot track you through them. In settings.py: COOKIES_ENABLED = False
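The section heading also mentions automatic speed limiting (AutoThrottle), which the notes above do not show. A minimal settings.py sketch combining both options; the delay values are illustrative assumptions, not from the course:

```python
# settings.py -- sketch combining cookie disabling and AutoThrottle

COOKIES_ENABLED = False       # do not send or store cookies

# AutoThrottle: Scrapy adapts the crawl speed to server response times
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5  # initial download delay (seconds)
AUTOTHROTTLE_MAX_DELAY = 60   # maximum delay under high latency
DOWNLOAD_DELAY = 1            # base delay between requests to the same site
```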

Per-spider settings can be written in custom_settings as follows:

# In the spider.py file
custom_settings = {
    "COOKIES_ENABLED": True,
    # ... other per-spider settings ...
}

Author: Jin Xiao

Source: http://www.cnblogs.com/jinxiao-pu/p/6762636.html

The copyright of this article is shared by the author and the blog. Reposting is welcome, but without the author's consent you must keep this statement and provide a visible link to the original article.
