Blocking second-level domains from Baidu search

Source: Internet
Author: User

If you no longer want certain second-level domains to be indexed, and it does not matter whether they remain reachable, there are two direct options: 301-redirect those specific domains to the primary domain, or bind the second-level domains to a new directory or subdirectory and then use robots.txt to block crawling of that directory.
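For illustration, a minimal PHP sketch of the 301 option might look like the following, assuming every request for the retired subdomain reaches this script (www.example.com stands in for the real primary domain):

<?php
// Permanently redirect every request on this host to the primary domain,
// keeping the originally requested path.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.example.com' . $_SERVER['REQUEST_URI']);
exit;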

If a second-level domain still needs to be used, you will have to sacrifice it for a period of time: resolve that domain separately to a new directory or a new server, return 404 for its pages, and then submit the dead links to the Baidu Webmaster Platform.
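A minimal sketch of the 404 step, assuming the subdomain has been re-pointed at a directory whose requests all reach this script:

<?php
// Every URL on this host is now a dead link; answer 404 so Baidu
// will drop the pages once they are submitted to the Webmaster Platform.
header('HTTP/1.1 404 Not Found');
exit;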

Another line of thinking is to use PHP to detect whether the visitor is a search-engine spider; if it is, return a 301 or a 404, or redirect it, whichever suits.

The following shows how to block search-engine crawling with robots.txt, together with the conventions for writing robots.txt directives:

User-agent: * (the asterisk is a wildcard that matches every type of search-engine crawler)

Disallow: /admin/ (blocks crawling of anything under the /admin/ directory)

Disallow: /require/ (blocks crawling of anything under the /require/ directory)

Disallow: /abc/ (blocks crawling of anything under the /abc/ directory)

Disallow: /cgi-bin/*.htm (blocks every URL under the /cgi-bin/ directory, including subdirectories, that ends in ".htm")

Disallow: /*?* (blocks every URL on the site that contains a question mark)

Disallow: /*.jpg$ (blocks crawling of all images in .jpg format)

Disallow: /ab/adc.html (blocks crawling of the file adc.html under the /ab/ directory)

Allow: /cgi-bin/ (allows crawling of anything under the /cgi-bin/ directory)

Allow: /tmp (allows crawling of the entire /tmp directory)

Allow: /*.htm$ (permits crawling of URLs that end in ".htm", used together with a broader Disallow rule)

Allow: /*.gif$ (allows crawling of web pages and images in .gif format)

Sitemap: <sitemap URL> (tells the crawler where the site's sitemap file lives)

One point to note: Disallow takes URL paths, not hostnames, so a rule such as Disallow: /test.baidu.com does not block a subdomain. Robots rules are evaluated per host, so to block a second-level domain like test.baidu.com you need a robots.txt at the root of that subdomain (or of the directory it has been bound to). For example:
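Here is a minimal sketch of such a robots.txt, assuming it is served from the root of the subdomain to be blocked. To shut out every crawler from the whole host:

User-agent: *
Disallow: /

To block only Baidu while leaving other engines untouched, address Baidu's crawler by name:

User-agent: Baiduspider
Disallow: /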

The other approach is to use PHP to screen out the search engines:

<?php
if (getrobot()) {                                  // a spider is visiting
    header('HTTP/1.1 301 Moved Permanently');      // issue the 301 header
    header('Location: http://www.baidu.com');      // the address to jump to
} else {
    echo 'Not spider access';
}

/**
 * Determine whether the visitor is a search-engine spider
 *
 * @return bool
 */
function getrobot() {
    $isrobot = false;
    $kw_spiders  = 'bot|crawl|spider|slurp|sohu-search|lycos|robozilla';
    $kw_browsers = 'msie|netscape|opera|konqueror|mozilla';
    if (!strexists($_SERVER['HTTP_USER_AGENT'], 'http://')
            && preg_match("/($kw_browsers)/i", $_SERVER['HTTP_USER_AGENT'])) {
        // ordinary browser user agent: leave $isrobot false
    } elseif (preg_match("/($kw_spiders)/i", $_SERVER['HTTP_USER_AGENT'])) {
        $isrobot = true;
    } else {
        $isrobot = false;
    }
    return $isrobot;
}

function strexists($string, $find) {
    return !(strpos($string, $find) === false);
}
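Since a 404 was mentioned above as an alternative, a variant built on the same getrobot() helper might look like this (a sketch, not part of the original code):

<?php
if (getrobot()) {
    // serve a dead link to spiders so the URL can be purged from the index
    header('HTTP/1.1 404 Not Found');
    exit;
}
// ...render the normal page for human visitors...

Keep in mind that this detection relies on the User-Agent header, which any client can spoof; a stricter check for genuine Baiduspider visits is a reverse DNS lookup on the visiting IP.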
