The choice of Python crawler technology

Source: Internet
Author: User

This article is not an introductory post; it assumes some familiarity with Python and web crawling.

Web crawling is a field of its own and touches many topics: you need to be familiar with web development, and sometimes machine learning and other areas come into play. In Python, however, everything becomes simpler, because many third-party libraries do the heavy lifting for us. The first step in writing a crawler in Python is choosing an appropriate module; at a minimum it must be able to send and process requests. Below are a few common approaches.

One, Python's built-in urllib2 and urllib, or the third-party requests module

This approach is suitable for simple page crawlers, such as crawling the recommended articles from a blog site.

urllib2 and urllib are Python's built-in modules for sending and handling requests. They are flexible, but their APIs are awkward to use, so they usually need a second layer of encapsulation. I wrapped them myself, and eventually found that my wrapper kept converging on the already mature requests library, so now I simply use requests directly. Note that requests only sends and receives requests; for page parsing you pair it with a third-party library such as lxml or BeautifulSoup. A more advanced crawler is not just about sending and processing requests: it also has to handle exceptions, control request rates, and so on, all of which requires extra control code. We could certainly write all of that ourselves given enough time, but it is completely unnecessary, which brings me to the second option.
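As a minimal sketch of this approach (the URL and the `h2.title` CSS selector are hypothetical placeholders, not from any real site), a requests fetch paired with BeautifulSoup parsing and a polite delay might look like this:

```python
import time

import requests
from bs4 import BeautifulSoup


def fetch(url, delay=1.0, timeout=10):
    """Fetch a page politely: wait first, then request, handling common errors."""
    time.sleep(delay)  # crude rate control between requests
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException as exc:
        print(f"request failed: {exc}")
        return None


def parse_titles(html):
    """Extract article titles; the h2.title selector is an assumption."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.select("h2.title")]


# Parsing works on any HTML string, no network needed:
sample = "<h2 class='title'>First post</h2><h2 class='title'>Second post</h2>"
print(parse_titles(sample))  # ['First post', 'Second post']
```

The fetch and parse steps are kept separate so the parsing rules can be tested offline against saved HTML.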

Two, the Scrapy framework

Scrapy is the best crawler toolkit I know of. In the projects I do, complex or not, I use Scrapy. Exception handling and request scheduling are already built in, and the third-party scrapy-redis extension adds support for distributed crawling, so we can focus on what matters more: page analysis and rule writing. For sample code, see my GitHub example.

Scrapy crawling Huxiu (Tiger Sniff) articles: https://github.com/world1234567/scrapy_for_huxiu

Three, Python Selenium

I call this approach the ultimate weapon; I use it only when there is truly no other way. I once crawled articles through a search engine whose anti-crawler mechanisms were sophisticated and constantly changing, so that no pattern could be found. Its most distinctive trick was randomly hiding the cookie inside the page's JS and images. The solution is to simulate a real browser's behavior and load all of the JS and other static resource files. Writing your own browser-level parsing is far too much work, but Chrome can be driven programmatically to load the page and hand back its content, and that is exactly what Selenium does. Selenium plus random wait times can simulate behavior very similar to a human's. The drawback is that it is slow, but crawlers generally do not need to be fast; stability matters more. This approach is well suited to crawling large sites with strong anti-crawler mechanisms.
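The idea can be sketched as below. To keep the pacing helper testable without a browser, the Selenium import is deferred into the function; the driver setup is an assumption, and ChromeDriver must be installed for the browser part to actually run.

```python
import random
import time


def human_wait(low=1.0, high=3.0):
    """Sleep a random interval to mimic human pacing; returns the delay used."""
    delay = random.uniform(low, high)
    time.sleep(delay)
    return delay


def fetch_rendered(url):
    """Load a page in a real Chrome so all JS and static resources execute."""
    # Deferred import: the rest of this module works without Selenium installed.
    from selenium import webdriver

    driver = webdriver.Chrome()  # requires ChromeDriver to be installed
    try:
        driver.get(url)
        human_wait()  # pause like a person before reading the page
        return driver.page_source
    finally:
        driver.quit()
```

Because the browser executes all JS, any cookie hidden in scripts or images is set exactly as it would be for a human visitor.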

In summary: for most crawling needs, use Scrapy directly; when it cannot solve the problem, fall back to the first or third option. It is that simple.
