Design and analysis of web crawlers in search engines - search engine technology

A search-engine web crawler has the following characteristics:
1. Web crawlers are highly configurable.
2. Web crawlers can parse the links on the pages they fetch.
3. Web crawlers have a simple storage configuration.
4. Web crawlers can intelligently decide when to update and re-analyze web pages.
5. Web crawlers should be as efficient as possible.
How do you design a crawler around these characteristics, and which steps deserve attention?
1. URL traversal and recording
Larbin does this very well. URL traversal is actually quite simple; for example:

cat [what you got] | tr \" \\n | gawk '{print $2}' | pcregrep ^http://

You can get a list of URLs.
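To record which URLs have already been seen, a minimal sketch (new_urls.txt and urls.txt are file names chosen here for illustration, with the pipeline's output saved to new_urls.txt) is to append each extraction to one list and keep it de-duplicated:

# append the freshly extracted URLs, then keep a single sorted copy of each
cat new_urls.txt >> urls.txt
sort -u urls.txt -o urls.txt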
2. Multi-process vs. multi-thread
Each has its own advantages. On an ordinary PC today, a site such as booso.com can easily crawl about 5 GB of data a day, roughly 200,000 web pages; see the sketch below.
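A crude multi-process fetcher can be sketched with xargs, which here starts up to eight wget processes in parallel (the file names and the count of eight are assumptions for illustration):

# fetch every URL in urls.txt with up to 8 wget processes at once
mkdir -p pages
cat urls.txt | xargs -n 1 -P 8 wget -q -P pages/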
3. Update-interval control
The most naive approach is to ignore update timing entirely: crawl the pages one by one, then start over and crawl them all again.
A better approach is to compare each crawl with the previous one. If a page has not changed for five consecutive crawls, double its crawl interval; if the page turns out to have changed within those five crawls, cut the interval in half, as sketched below.
Note that efficiency is one of the keys to winning.
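A minimal sketch of this interval rule in shell, assuming the last content hash of each page is kept in a hash/ directory and the interval is measured in hours (the URL, interval, and unchanged-count are passed in as arguments; all names here are illustrative):

#!/bin/sh
# double the interval after 5 unchanged fetches, halve it as soon as the page changes
url=$1; interval=$2; unchanged=$3
key=$(echo "$url" | md5sum | cut -d' ' -f1)
new_hash=$(curl -s "$url" | md5sum | cut -d' ' -f1)
old_hash=$(cat "hash/$key" 2>/dev/null)
if [ "$new_hash" = "$old_hash" ]; then
    unchanged=$((unchanged + 1))
    if [ "$unchanged" -ge 5 ]; then
        interval=$((interval * 2))      # page looks static: back off
        unchanged=0
    fi
else
    interval=$((interval / 2))          # page changed: poll more often
    [ "$interval" -lt 1 ] && interval=1
    unchanged=0
fi
mkdir -p hash && echo "$new_hash" > "hash/$key"
echo "$interval $unchanged"             # the caller stores these for the next run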
4. How deep should the crawl go?
It depends on your situation. If you have tens of thousands of servers doing the crawling, you can skip this step. If, like me, you have only one server for crawling, you should know the following statistics:
Page depth : Number of pages : Page importance
0 : 1 : 10
1 : 20 : 8
2 : 600 : 5
3 : 2,000 : 2
4 and deeper : 6,000 : generally too low to measure
So stopping at about three levels is enough: going deeper multiplies the data volume three to four times over while the importance of the pages drops sharply. As the saying goes, you sow dragon seeds and harvest fleas: a great deal of effort for very little return.
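If a single command line is enough, wget already supports a depth limit; a minimal sketch that stops at three levels (example.com is a placeholder):

# recursive crawl limited to depth 3, staying below the starting directory
wget -r -l 3 -np -q http://example.com/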
5. Crawlers generally do not request the other side's pages directly; they usually go out through a proxy. The proxy helps relieve the load, because when the target page has not been updated you only need to fetch its headers; there is no need to transfer the whole body again, which saves a great deal of network bandwidth.
The 304 responses recorded in the Apache web server's logs are generally pages served from the cache.
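A minimal sketch with curl and wget (the URL, file name, and proxy address are placeholders): a HEAD request fetches only the headers, wget's time-stamping mode re-downloads only when the server copy is newer, and the http_proxy variable routes both tools through a caching proxy:

# fetch the headers alone to see whether the page changed (a HEAD request)
curl -s -I http://example.com/
# re-download only when the server copy is newer than the local one
wget -N -q http://example.com/index.html
# route requests through a caching proxy (proxycache:3128 is a placeholder)
export http_proxy=http://proxycache:3128/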
6. When you have the time, also take a look at robots.txt.
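It tells you which paths the site owner does not want crawled. A quick way to see the rules (example.com is a placeholder):

# list the User-agent and Disallow rules the site publishes
curl -s http://example.com/robots.txt | pcregrep -i '^(user-agent|disallow)'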
7. Storage structure
Opinions differ here. Google uses its own GFS. If you have seven or eight servers, I suggest NFS; if you have seventy or eighty servers, I suggest AFS; if you have only one server, the local file system is enough.
A piece of code showing how the news search engine I wrote stores its data:


# percent-encode every character outside [A-Za-z0-9_], "-", "." and "@" so the URL becomes a safe flat file name
NAME=`echo $URL | perl -p -e 's/([^\w\-\.\@])/$1 eq "\n" ? "\n" : sprintf("%%%2.2x", ord($1))/eg'`
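For instance, with a hypothetical URL=http://news.example.com/a?id=1, the line above yields a name like http%3a%2f%2fnews.example.com%2fa%3fid%3d1, so each distinct URL maps to one file name that is safe on the file system.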
