A look at Internet crawler design through larbin

Source: Internet
Author: User
Reprinted: http://blog.ifeng.com/article/121656.html

The Internet is a massive unstructured database. Retrieving, organizing, and presenting its data effectively has huge application prospects, especially for XML-based structured formats such as RSS: more and more data is processed this way, content organization grows ever more flexible, adoption keeps widening, and the demands on timeliness and readability keep rising. All of this rests on crawlers and information sources, so an efficient, flexible, and scalable crawler is of irreplaceable significance for these applications.

To design a crawler, you must first consider efficiency. For network I/O, there are three common programming models.

The first is single-threaded blocking. This is the simplest and easiest to implement. For example, in a shell you can wire system commands such as curl and pcregrep into a simple crawler, but its efficiency problem is just as obvious: DNS resolution, connection establishment, writing the request, and reading the result each block in turn, introducing a delay at every step, so the machine's resources are never used effectively.
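
To make the blocking model concrete, here is a minimal C++ sketch over POSIX sockets (the host and path are placeholders, and error handling is pared down). Every step, from name resolution to the last read, stalls the whole program:

    #include <cstdio>
    #include <string>
    #include <netdb.h>
    #include <unistd.h>
    #include <sys/socket.h>

    // Fetch one page with blocking calls: each step waits for the previous one.
    std::string blockingFetch(const char *host, const char *path) {
        addrinfo hints = {}, *res;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0)        // blocking DNS lookup
            return "";
        int fd = socket(res->ai_family, res->ai_socktype, 0);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) { // blocking connect
            freeaddrinfo(res);
            close(fd);
            return "";
        }
        freeaddrinfo(res);
        std::string req = std::string("GET ") + path + " HTTP/1.0\r\nHost: "
                        + host + "\r\n\r\n";
        write(fd, req.data(), req.size());                     // blocking write
        std::string page;
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)            // blocking reads
            page.append(buf, n);
        close(fd);
        return page;
    }

    int main() {
        std::string page = blockingFetch("example.com", "/");
        printf("fetched %zu bytes\n", page.size());
    }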

The second is multi-threaded blocking. Create multiple threads, each issuing blocking requests for different URLs. Compared with the first method, this uses machine resources far more effectively, especially the network: with many threads working at the same time, the network stays relatively busy. But it also consumes a large amount of CPU, and the performance impact of frequent thread switching is worth considering.
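
A sketch of the multi-threaded variant, with a stubbed-out fetch standing in for a real blocking one like the sketch above:

    #include <cstdio>
    #include <string>
    #include <thread>
    #include <vector>

    // Stub standing in for a blocking fetch like the one sketched earlier.
    std::string blockingFetch(const std::string &url) {
        return "<html>body of " + url + "</html>";
    }

    int main() {
        std::vector<std::string> urls = {
            "http://example.com/a", "http://example.com/b", "http://example.com/c",
        };
        std::vector<std::thread> workers;
        for (const auto &u : urls)
            workers.emplace_back([u] {          // one blocking worker per URL
                std::string page = blockingFetch(u);
                printf("%s: %zu bytes\n", u.c_str(), page.size());
            });
        for (auto &w : workers)
            w.join();                           // wait for every fetch to finish
    }

A real crawler would cap this with a fixed-size thread pool; one thread per URL is only for illustration, and the per-thread stacks and switching are exactly the cost described above.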

The third is single-threaded non-blocking. This is a widely used model, on both clients and servers. Open multiple non-blocking connections in one thread, check their status with poll/epoll/select, and respond to each request as soon as it is ready. This makes full use of network resources while keeping the local machine's CPU consumption to a minimum. The method requires that DNS requests, connects, reads, and writes all be asynchronous and non-blocking. The first is the most complex; ADNS can be used as a ready-made solution, while the latter three operations can be implemented directly in the program.
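
A minimal sketch of this event loop using epoll on Linux (poll or select would follow the same pattern). The socket is made non-blocking, connect() returns immediately, and the loop reacts as the socket becomes writable and then readable; the blocking getaddrinfo call is kept only for brevity, where a real crawler of this design would use an asynchronous resolver such as ADNS:

    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <netdb.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main() {
        // Blocking DNS here only for brevity; resolve asynchronously in practice.
        addrinfo hints = {}, *res;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

        // Non-blocking connect: returns at once, completion is reported by epoll.
        int fd = socket(res->ai_family, res->ai_socktype, 0);
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
        connect(fd, res->ai_addr, res->ai_addrlen);   // expect EINPROGRESS
        freeaddrinfo(res);

        int ep = epoll_create1(0);
        epoll_event ev = {};
        ev.events = EPOLLOUT;                 // writable means connect finished
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

        // One socket is shown for clarity; the same loop drives hundreds at once.
        bool sent = false;
        char buf[4096];
        for (;;) {
            epoll_event events[16];
            int n = epoll_wait(ep, events, 16, -1);   // single thread, no busy wait
            for (int i = 0; i < n; i++) {
                int s = events[i].data.fd;
                if (!sent && (events[i].events & EPOLLOUT)) {
                    const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
                    write(s, req, strlen(req));
                    sent = true;
                    ev.events = EPOLLIN;              // now wait for the response
                    epoll_ctl(ep, EPOLL_CTL_MOD, s, &ev);
                } else if (events[i].events & EPOLLIN) {
                    ssize_t got = read(s, buf, sizeof buf);
                    if (got <= 0) { close(s); return 0; }   // server closed: done
                    fwrite(buf, 1, got, stdout);
                }
            }
        }
    }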

With the efficiency problem solved, you need to consider the concrete design questions.

A URL should be handled by a dedicated class, responsible both for displaying the URL and for parsing it to obtain the host, port, and file (path).
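
A minimal sketch of such a URL class, with invented member names (host, port, file); larbin's real url class is more complete:

    #include <cstdio>
    #include <string>

    // Hypothetical URL class: parse "http://host:port/file" into components.
    class Url {
    public:
        std::string host;
        int port = 80;
        std::string file = "/";    // the path part, "file" in the text above

        explicit Url(const std::string &s) {
            std::string rest = s;
            if (rest.rfind("http://", 0) == 0) rest = rest.substr(7);
            size_t slash = rest.find('/');
            std::string hostport = rest.substr(0, slash);
            if (slash != std::string::npos) file = rest.substr(slash);
            size_t colon = hostport.find(':');
            if (colon == std::string::npos) {
                host = hostport;
            } else {
                host = hostport.substr(0, colon);
                port = std::stoi(hostport.substr(colon + 1));
            }
        }

        std::string str() const {  // "display": print the URL back out
            return "http://" + host + ":" + std::to_string(port) + file;
        }
    };

    int main() {
        Url u("http://example.com:8080/docs/index.html");
        printf("host=%s port=%d file=%s\n", u.host.c_str(), u.port, u.file.c_str());
    }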

Then URLs need to be deduplicated, which requires a large URL hash table.

If you also want to deduplicate page content, a document hash table is needed as well.
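
A minimal sketch of the dedup structure, assuming a large bit array indexed by a string hash: testAndSet() answers membership in O(1), at the cost of rare false positives from hash collisions. The same class can back both the URL table and the document-content table:

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // A big bit array keyed by a string hash. testAndSet() returns true if the
    // key was (probably) seen before; collisions can cause false positives.
    class HashDedup {
        std::vector<bool> bits;
    public:
        explicit HashDedup(size_t nbits) : bits(nbits, false) {}
        bool testAndSet(const std::string &key) {
            size_t i = std::hash<std::string>{}(key) % bits.size();
            bool seen = bits[i];
            bits[i] = true;
            return seen;
        }
    };

    int main() {
        HashDedup urlSeen(64 * 1024 * 1024);   // URL hash table: one bit per slot
        HashDedup docSeen(64 * 1024 * 1024);   // document-content hash table
        printf("%d\n", urlSeen.testAndSet("http://example.com/"));  // 0: new URL
        printf("%d\n", urlSeen.testAndSet("http://example.com/"));  // 1: duplicate
        std::string page = "<html>...</html>";
        printf("%d\n", docSeen.testAndSet(page));  // dedup by page-content hash
    }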

Crawled URLs need to be recorded, and because of their volume they are written to disk, so a disk-backed FIFO class is also needed (call it urlsdisk).
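
A minimal sketch of a disk-backed FIFO, assuming one URL per line in a single file (the name DiskFifo is invented here; a production version would rotate files and persist the read cursor):

    #include <fstream>
    #include <iostream>
    #include <string>

    // Hypothetical disk-backed FIFO: append lines at the tail, read from a cursor.
    class DiskFifo {
        std::string path;
        std::streampos readPos = 0;
    public:
        explicit DiskFifo(std::string p) : path(std::move(p)) {}

        void put(const std::string &line) {
            std::ofstream out(path, std::ios::app);  // append at the tail
            out << line << '\n';
        }

        bool get(std::string &line) {                // pop from the head
            std::ifstream in(path);
            in.seekg(readPos);
            if (!std::getline(in, line)) return false;
            readPos = in.tellg();                    // remember how far we read
            return true;
        }
    };

    int main() {
        DiskFifo urlsdisk("urls.fifo");
        urlsdisk.put("http://example.com/");
        std::string u;
        while (urlsdisk.get(u))
            std::cout << u << '\n';
    }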

URLs waiting to be crawled also need a FIFO class. URLs taken from the crawled-URL FIFO are written into this one, and the running crawler reads from it to fill the per-host URL lists. Of course, the crawler may also read URLs directly from the first FIFO, but those should get lower priority than the URLs here, since after all they have already been crawled.
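
A sketch of that two-level scheduling, with invented names; in-memory deques stand in for the two FIFO classes, and the low-priority queue would really be fed from the disk FIFO of already-crawled URLs:

    #include <deque>
    #include <iostream>
    #include <string>

    // Two-level URL scheduling: fresh URLs outrank re-crawls of known URLs.
    struct UrlScheduler {
        std::deque<std::string> fresh;    // never crawled: high priority
        std::deque<std::string> recrawl;  // already crawled once: low priority
                                          // (read back from the crawled-URL FIFO)

        bool next(std::string &url) {
            std::deque<std::string> &q = !fresh.empty() ? fresh : recrawl;
            if (q.empty()) return false;
            url = q.front();
            q.pop_front();
            return true;
        }
    };

    int main() {
        UrlScheduler sched;
        sched.recrawl.push_back("http://example.com/old");
        sched.fresh.push_back("http://example.com/new");
        std::string u;
        while (sched.next(u))
            std::cout << u << '\n';       // prints /new before /old
    }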

A crawler generally crawls multiple websites, but within the same site the DNS request should be made only once, so the host name must be separated from the URL and handled by a class of its own.

Once the host name has been resolved, you need a class for the resolved IP, which is then used for connect.
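
A sketch covering these two points, with invented names: each Host resolves its name once, caches the sockaddr_in, and reuses it for every connect on that site. Blocking getaddrinfo is used for brevity; the non-blocking design above would swap in an asynchronous resolver such as ADNS:

    #include <cstring>
    #include <map>
    #include <string>
    #include <netdb.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    // Hypothetical per-host class: resolve once, cache the IP, reuse for connect.
    class Host {
        sockaddr_in addr = {};
        bool resolved = false;
    public:
        bool resolve(const std::string &name) {        // one DNS request per site
            if (resolved) return true;
            addrinfo hints = {}, *res;
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(name.c_str(), "80", &hints, &res) != 0) return false;
            std::memcpy(&addr, res->ai_addr, sizeof addr);
            freeaddrinfo(res);
            resolved = true;
            return true;
        }
        int connectSocket() const {                    // every URL on this host
            int fd = socket(AF_INET, SOCK_STREAM, 0);  // reuses the cached IP
            if (connect(fd, (const sockaddr *)&addr, sizeof addr) != 0) {
                close(fd);
                return -1;
            }
            return fd;
        }
    };

    int main() {
        std::map<std::string, Host> hosts;   // one entry per site
        Host &h = hosts["example.com"];
        if (h.resolve("example.com")) {
            int fd = h.connectSocket();
            if (fd >= 0) close(fd);
        }
    }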

An HTML-document parsing class is also needed to analyze fetched pages and extract the URLs they contain into urlsdisk.
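
A naive sketch of the extraction step: scan the page for href="..." attributes and collect the targets, which would then be written into urlsdisk. A real parser must also handle unquoted attributes and resolve relative URLs against the page's base, which is skipped here:

    #include <iostream>
    #include <string>
    #include <vector>

    // Naive link extraction: collect the contents of every href="..." attribute.
    std::vector<std::string> extractLinks(const std::string &html) {
        std::vector<std::string> links;
        const std::string key = "href=\"";
        size_t pos = 0;
        while ((pos = html.find(key, pos)) != std::string::npos) {
            pos += key.size();
            size_t end = html.find('"', pos);
            if (end == std::string::npos) break;
            links.push_back(html.substr(pos, end - pos));
            pos = end + 1;
        }
        return links;
    }

    int main() {
        std::string page = "<a href=\"http://example.com/a\">a</a>"
                           "<a href=\"http://example.com/b\">b</a>";
        for (const auto &u : extractLinks(page))
            std::cout << u << '\n';   // these would be written into urlsdisk
    }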

With a few string and scheduling classes added, a simple crawler is basically complete.

The above is basically larbin's design concept. Beyond it there is some special handling, such as a built-in web server and the processing of special file types. larbin also has one design flaw: slow fetches accumulate over time and occupy a large number of connections, which needs improvement. In addition, to scale a crawler up with distributed expansion, you also need to add centralized URL management and scheduling, plus a distribution algorithm for the front-end spiders.
