How can we prevent unfriendly search engine robot spider crawlers?

Source: Internet
Author: User
How can we prevent unfriendly search engine robot spider crawlers? Today we found that MySQL traffic on the server is high. I checked the log and found an unfriendly spider crawler: it hit pages 7 or 8 times in one second and crawled pages across the entire site, querying the database non-stop.
I would like to ask how to prevent this kind of problem. For now I have blocked this IP address.


Reply to discussion (solution)

There is no good way. You can observe which IP addresses access the site frequently and block them if possible.

In the program, you can enforce a minimum interval between two accesses from the same IP address.
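For example, a minimal sketch of that idea in Python, assuming you can call a check like this at the entry point of each page request (the 2-second interval and the in-memory dictionary are illustrative; a shared store would be needed across multiple server processes):

    import time

    MIN_INTERVAL = 2.0   # seconds required between two requests from the same IP (illustrative)
    last_seen = {}       # ip -> timestamp of that IP's previous request (in-memory, per process)

    def allow_request(ip):
        """Return True if this IP has waited long enough since its last request."""
        now = time.time()
        previous = last_seen.get(ip)
        last_seen[ip] = now
        if previous is not None and now - previous < MIN_INTERVAL:
            return False   # too fast: reject the request or serve a cached page instead
        return True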

Thank you. I'm using Discuz. I have a robots.txt that can be used to control which parts of the site crawlers may access. Is this useful?

Learning first!

robots.txt is a general convention: search engines will fetch your robots.txt first and then crawl according to the rules it sets.
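For reference, a minimal robots.txt sketch placed at the site root (the paths are hypothetical examples, and the Crawl-delay directive is only honored by some engines):

    # /robots.txt
    User-agent: *
    Disallow: /search.php      # keep crawlers off expensive search pages (example path)
    Disallow: /admin/
    Crawl-delay: 5             # ask polite crawlers to wait 5 seconds between requests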

The key is to look at the access log to see which crawlers are hitting the site. Legitimate crawlers identify themselves (usually including their website) in the User-Agent header.

Configure the access_log format to record the User-Agent, restart the server, and after a while check the access_log again to see whether it is a legitimate crawler. If it is not a legitimate crawler, there is not much you can do about its behavior; you can only restrict access by IP.
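As a rough sketch of that check, here is a small Python script that counts hits per IP and per User-Agent, assuming the common "combined" log format where the User-Agent is the last double-quoted field (the file name access.log is just an example):

    import re
    from collections import Counter

    ip_counts = Counter()
    ua_counts = Counter()

    with open("access.log", encoding="utf-8", errors="replace") as f:
        for line in f:
            ip = line.split(" ", 1)[0]                # client IP is the first field
            quoted = re.findall(r'"([^"]*)"', line)   # "request", "referer", "user-agent"
            ua = quoted[-1] if quoted else "-"
            ip_counts[ip] += 1
            ua_counts[ua] += 1

    print("Busiest IPs:", ip_counts.most_common(10))
    print("Most common User-Agents:", ua_counts.most_common(10))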

Baidu's various spider names:

The product name and its corresponding user-agent:
Web search: Baiduspider
Wireless (mobile) search: Baiduspider-mobile
Image search: Baiduspider-image
Video search: Baiduspider-video
News search: Baiduspider-news
Baidu Soucang (bookmarks): Baiduspider-favo
Baidu Union: Baiduspider-cpro

These are Baidu's crawlers.
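Note that a User-Agent string is easy to fake. One common way to check whether a request claiming to be Baiduspider really comes from Baidu is a reverse DNS lookup on the client IP; this is only a sketch, and forward-confirming the returned hostname would make it stricter:

    import socket

    def is_probably_baidu(ip):
        """Reverse-resolve the IP and check for a baidu.com / baidu.jp hostname."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
        except OSError:
            return False
        return hostname.endswith(".baidu.com") or hostname.endswith(".baidu.jp")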

Well, for now I plan to write a program to limit how often each IP address can run queries.

robots.txt works for most crawlers, but some crawlers, such as Baidu's, cannot be blocked by it.
It is still recommended that you add one.

In principle robots.txt is enough, but there are always crawlers that do not follow the rules, so you still need to log and block some IP addresses...

Of course, you can also use https...

A crawler that hits you several times a second, do you think it will look at your robots.txt?
Baidu doesn't even respect it, let alone this kind of scraper.

Check its request frequency or analyze its UA header, then block it. No need to be polite about it.
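A minimal sketch of blocking on the UA header (the blacklist entries are only examples of typical scraper signatures, not a definitive list):

    BLOCKED_UA_SUBSTRINGS = ("python-requests", "curl", "scrapy", "httpclient")

    def should_block(user_agent):
        """Block empty User-Agents and anything matching a known scraper signature."""
        if not user_agent:
            return True                      # many scrapers send no UA at all
        ua = user_agent.lower()
        return any(sig in ua for sig in BLOCKED_UA_SUBSTRINGS)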

How does a collector crawler work?
