Using a Python crawler with proxy IPs to quickly increase blog page views


A few words up front

The title isn't really the goal; the main purpose is to get a more detailed look at websites' anti-crawling mechanisms. If you genuinely want to grow your blog's readership, high-quality content is essential.

Understanding websites' anti-crawling mechanisms

In general, websites defend against crawlers in the following ways:

1. Anti-crawling via request headers

Checking the headers of user requests is the most common anti-crawling strategy. Many sites check the User-Agent header, and a number of them also check the Referer (some sites' hotlink protection for resources works by checking the Referer).

If you run into this kind of mechanism, you can simply add the headers to your crawler: copy the browser's User-Agent into the crawler's headers, or set the Referer to the target site's domain. For header-based detection, modifying or adding headers in the crawler is usually enough to get around it, as in the sketch below.
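
Here is a minimal sketch of the idea using urllib from the standard library; the target URL is just a placeholder for whatever site you are crawling.

from urllib import request

# Hypothetical target page; replace it with the page you actually want.
url = 'http://www.example.com/page'
headers = {
    # A User-Agent copied from a real browser.
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36',
    # Point the Referer at the target site's own domain.
    'Referer': 'http://www.example.com/',
}
req = request.Request(url, headers=headers)
html = request.urlopen(req).read().decode('utf-8')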

2. Anti-crawling based on user behavior

Other sites detect user behavior instead, for example the same IP visiting the same page many times within a short period, or the same account performing the same operation repeatedly within a short period.

Most sites do the former, and in that case proxy IPs solve the problem. We could save scraped proxy IPs to a file, but that approach isn't ideal, because proxy IPs go stale very quickly, so crawling them in real time from a site that publishes proxy IPs is the better choice.

For the second case, you can simply wait a random few seconds after each request before issuing the next one (a small sketch follows). On some sites with logic flaws, you can also make a few requests, log out, log back in, and keep requesting, which sidesteps the rule that the same account cannot repeat the same request within a short period.
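
A minimal sketch of the random-interval idea, using the same time and random modules the code later in this article relies on; the URLs are placeholders.

import random
import time
from urllib import request

urls = ['http://www.example.com/page/%d' % i for i in range(1, 6)]  # placeholder URLs
for url in urls:
    html = request.urlopen(url).read()
    # Sleep a random 1-3 seconds so the access pattern looks less mechanical.
    time.sleep(random.randint(1, 3))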

As for cookies: sites that require login often check cookies to decide whether a user is legitimate. Going a step further, some sites' login flows dynamically update a verification value. Tuicool's login, for example, generates a randomly assigned authenticity_token for login verification, which has to be sent back to the server together with the submitted username and password.
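
For cookie-checking sites, one approach (sketched here with the standard library only; the login URL, form fields, and token value are placeholders) is to keep a cookie jar across requests so the session cookie issued at login is sent back automatically.

from http.cookiejar import CookieJar
from urllib import parse, request

cookie_jar = CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(cookie_jar))

# Hypothetical login endpoint and form fields, including a token scraped
# from the login page (in the spirit of authenticity_token).
login_data = parse.urlencode({
    'username': 'user',
    'password': 'pass',
    'authenticity_token': 'token-scraped-from-login-page',
}).encode('utf-8')
opener.open('http://www.example.com/login', data=login_data)

# Later requests through the same opener carry the session cookies along.
html = opener.open('http://www.example.com/protected-page').read()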

3. Anti-crawling based on dynamic pages

Sometimes you crawl the target page only to find the key content blank, with nothing but skeleton markup. That is because the site returns its information dynamically through XHR requests made by the page. The way to handle it is to analyze the site's traffic with a developer tool (Firebug and the like), find the specific request that carries the content (often JSON), and crawl that request directly to get what you need.
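
Once the XHR request has been located in the developer tools, it can usually be fetched directly. A small sketch, assuming a hypothetical JSON endpoint discovered through that analysis:

import json
from urllib import request

# Hypothetical XHR endpoint found in the browser's network panel.
api_url = 'http://www.example.com/api/list?page=1'
req = request.Request(api_url, headers={'User-Agent': 'Mozilla/5.0'})
data = json.loads(request.urlopen(req).read().decode('utf-8'))
print(data)  # the content that never shows up in the static HTML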

Things get more complicated when the dynamic requests are encrypted and the parameters can't be worked out, making them impossible to crawl directly. In that case you can use Mechanize or Selenium RC to drive a real browser kernel and fetch pages just as a real browser would. That maximizes the chance of success, but efficiency takes a hit. In the author's test, grabbing 30 pages of Lagou job listings with urllib took a little over 30 seconds, while crawling them through a simulated browser kernel took 2-3 minutes.
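
A hedged sketch of the browser-kernel approach using Selenium's Python bindings (it assumes chromedriver is installed and on the PATH; the URL is a placeholder):

from selenium import webdriver

driver = webdriver.Chrome()                  # requires chromedriver on the PATH
driver.get('http://www.example.com/page')    # placeholder URL
html = driver.page_source                    # the HTML after JavaScript has run
driver.quit()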

4. Restricting access from certain IPs

Free proxy IPs can be obtained from a number of websites. Since crawlers can use those proxy IPs to crawl a site, the site can turn the same lists against them: by crawling those IPs and saving them on the server, it can block crawlers that come in through them. A toy illustration of that check follows.
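
This is only to illustrate the server-side idea; the addresses are made up.

# Toy blocklist of proxy IPs harvested from public proxy sites.
blocked_proxies = {'1.2.3.4', '5.6.7.8'}

def is_blocked(client_ip):
    # Return True if the requesting IP is a known public proxy.
    return client_ip in blocked_proxies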

Getting to the point

OK, now let's actually write a crawler that visits a site through proxy IPs.

First, get some proxy IPs to crawl with.

# This function relies on `from urllib import request` and `import re`
# (both imports appear in the full source at the end of the article).
def get_proxy_ip():
    headers = {
        'Host': 'www.xicidaili.com',
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
        'Accept': r'application/json, text/javascript, */*; q=0.01',
        'Referer': r'http://www.xicidaili.com/',
    }
    # www.xicidaili.com is a site that publishes free proxy IPs
    req = request.Request(r'http://www.xicidaili.com/nn/', headers=headers)
    response = request.urlopen(req)
    html = response.read().decode('utf-8')
    proxy_list = []
    ip_list = re.findall(r'\d+\.\d+\.\d+\.\d+', html)
    port_list = re.findall(r'<td>\d+</td>', html)
    for i in range(len(ip_list)):
        ip = ip_list[i]
        port = re.sub(r'<td>|</td>', '', port_list[i])
        proxy = '%s:%s' % (ip, port)
        proxy_list.append(proxy)
    return proxy_list

Incidentally, some sites limit crawlers by checking the real IP behind a proxy IP, so here is a little background on proxy IPs.

What do "transparent", "anonymous", and "elite" (high-anonymity) mean for a proxy IP?

A transparent proxy means the client doesn't need to know a proxy server exists at all, but the proxy still passes the real IP along. With a transparent proxy you cannot get around a limit on the number of accesses per IP in a given period.

An ordinary anonymous proxy hides the client's real IP but alters the request, so the server can tell a proxy is in use. In other words, the visited site does not learn your IP address, but it still knows you are behind a proxy, and such IPs can end up banned by the site.

An elite (high-anonymity) proxy does not alter the client's request, so to the server it looks like a real browser is visiting, the client's real IP stays hidden, and the site never suspects a proxy is being used.

To sum up, a crawler's proxy IPs should ideally be elite ("high-anonymity") proxies.
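
A quick way to check what a target actually sees through a given proxy is to request an IP-echo service through it. A small sketch with urllib; the proxy address is a placeholder, and httpbin.org/ip is a public service that simply returns the caller's apparent IP.

from urllib import request

proxy_ip = '1.2.3.4:8080'  # placeholder proxy address
proxy_support = request.ProxyHandler({'http': proxy_ip})
opener = request.build_opener(proxy_support)
# With an elite proxy the echoed IP is the proxy's, and headers such as
# X-Forwarded-For or Via do not reveal the real client.
print(opener.open('http://httpbin.org/ip').read().decode('utf-8'))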

user_agent_list contains the User-Agent request headers of today's mainstream browsers; it lets us imitate requests coming from a variety of browsers.

user_agent_list = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
]

By waiting a random amount of time before each visit to the site, you can get around some sites' restrictions on request intervals.

def proxy_read(proxy_list, user_agent_list, i):
    proxy_ip = proxy_list[i]
    print('Current proxy ip: %s' % proxy_ip)
    user_agent = random.choice(user_agent_list)
    print('Current user_agent: %s' % user_agent)
    sleep_time = random.randint(1, 3)
    print('Wait time: %s s' % sleep_time)
    time.sleep(sleep_time)  # random wait before the request
    print('Get started')
    headers = {
        'Host': 's9-im-notify.csdn.net',
        'Origin': 'http://blog.csdn.net',
        'User-Agent': user_agent,
        'Accept': r'application/json, text/javascript, */*; q=0.01',
        'Referer': r'http://blog.csdn.net/u010620031/article/details/51068703',
    }
    # route this request through the chosen proxy
    proxy_support = request.ProxyHandler({'http': proxy_ip})
    opener = request.build_opener(proxy_support)
    request.install_opener(opener)
    req = request.Request(r'http://blog.csdn.net/u010620031/article/details/51068703', headers=headers)
    try:
        html = request.urlopen(req).read().decode('utf-8')
    except Exception as e:
        print('Open failed!')
    else:
        global count
        count += 1
        print('OK! Succeeded %s times in total!' % count)

That's the essentials of using proxies in a crawler. It is still fairly basic, but it handles most scenarios.

The full source code is attached below.

#!/usr/bin/env python3
from urllib import request
import random
import time
import re

user_agent_list = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
]

count = 0

def get_proxy_ip():
    # Scrape a list of 'ip:port' proxies from a site that publishes free proxy IPs.
    headers = {
        'Host': 'www.xicidaili.com',
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
        'Accept': r'application/json, text/javascript, */*; q=0.01',
        'Referer': r'http://www.xicidaili.com/',
    }
    req = request.Request(r'http://www.xicidaili.com/nn/', headers=headers)
    response = request.urlopen(req)
    html = response.read().decode('utf-8')
    proxy_list = []
    ip_list = re.findall(r'\d+\.\d+\.\d+\.\d+', html)
    port_list = re.findall(r'<td>\d+</td>', html)
    for i in range(len(ip_list)):
        ip = ip_list[i]
        port = re.sub(r'<td>|</td>', '', port_list[i])
        proxy = '%s:%s' % (ip, port)
        proxy_list.append(proxy)
    return proxy_list

def proxy_read(proxy_list, user_agent_list, i):
    # Request the target blog page through the i-th proxy with a random User-Agent.
    proxy_ip = proxy_list[i]
    print('Current proxy ip: %s' % proxy_ip)
    user_agent = random.choice(user_agent_list)
    print('Current user_agent: %s' % user_agent)
    sleep_time = random.randint(1, 3)
    print('Wait time: %s s' % sleep_time)
    time.sleep(sleep_time)  # random wait before the request
    print('Get started')
    headers = {
        'Host': 's9-im-notify.csdn.net',
        'Origin': 'http://blog.csdn.net',
        'User-Agent': user_agent,
        'Accept': r'application/json, text/javascript, */*; q=0.01',
        'Referer': r'http://blog.csdn.net/u010620031/article/details/51068703',
    }
    proxy_support = request.ProxyHandler({'http': proxy_ip})
    opener = request.build_opener(proxy_support)
    request.install_opener(opener)
    req = request.Request(r'http://blog.csdn.net/u010620031/article/details/51068703', headers=headers)
    try:
        html = request.urlopen(req).read().decode('utf-8')
    except Exception as e:
        print('Open failed!')
    else:
        global count
        count += 1
        print('OK! Succeeded %s times in total!' % count)

if __name__ == '__main__':
    proxy_list = get_proxy_ip()
    for i in range(len(proxy_list)):  # walk through every proxy that was scraped
        proxy_read(proxy_list, user_agent_list, i)

That's the whole article. I hope it can be of some help in your study or work, and I also hope you will keep supporting the Yunqi Community!
