As a webmaster, you may want to know whether Baidu Spider and other search engine crawlers have visited your site's articles each day. Many webmasters do not know how to query this with tools. The raw access logs in the hosting space do contain the information, but they are hard to read, and it is not obvious which entries belong to search engine crawlers. Below is a PHP script that records the crawl visits of the major search spiders.
The following search engines are supported:
It records crawl visits from Baidu, Google, Bing, Yahoo, Soso, Sogou, and Yodao.
The PHP code is as follows:
<?php
// Identify the search engine spider from the User-Agent string.
// Returns the spider's name, or false for ordinary visitors.
function get_naps_bot()
{
    $useragent = strtolower($_SERVER['HTTP_USER_AGENT']);
    // Use !== false: strpos() may return position 0, which == false.
    if (strpos($useragent, 'googlebot') !== false) {
        return 'Google';
    }
    if (strpos($useragent, 'baiduspider') !== false) {
        return 'Baidu';
    }
    if (strpos($useragent, 'msnbot') !== false) {
        return 'Bing';
    }
    if (strpos($useragent, 'slurp') !== false) {
        return 'Yahoo';
    }
    if (strpos($useragent, 'sosospider') !== false) {
        return 'Soso';
    }
    if (strpos($useragent, 'sogou spider') !== false) {
        return 'Sogou';
    }
    if (strpos($useragent, 'yodaobot') !== false) {
        return 'Yodao';
    }
    return false;
}

// Current timestamp, e.g. 2012-01-01.8:05:09
// (lowercase i is the minutes format character in date()).
function nowtime()
{
    return date("Y-m-d.G:i:s");
}

$searchbot = get_naps_bot();
if ($searchbot) {
    $tlc_thispage = addslashes($_SERVER['HTTP_USER_AGENT']);
    $url = isset($_SERVER['HTTP_REFERER']) ? addslashes($_SERVER['HTTP_REFERER']) : '';
    $file = "www.jb51.net.txt";
    $time = nowtime();
    // Open in append mode ("a") so earlier records are preserved.
    $data = fopen($file, "a");
    fwrite($data, "Time:$time robot:$searchbot UA:$tlc_thispage URL:$url\n");
    fclose($data);
}
// Collect http://www.jb51.net
?>
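To sanity-check the detection logic without a web server, the same User-Agent matching can be pulled into a function that takes the string as a parameter. This is a hypothetical refactor for testing, not part of the original script; the spider names and keywords are the ones the script above checks for.

```php
<?php
// Hypothetical refactor of get_naps_bot(): the User-Agent is
// passed in as an argument instead of read from $_SERVER,
// so the function can be exercised from the command line.
function detect_spider($ua)
{
    $ua = strtolower($ua);
    // Keyword => reported spider name, as in the script above.
    $spiders = array(
        'googlebot'    => 'Google',
        'baiduspider'  => 'Baidu',
        'msnbot'       => 'Bing',
        'slurp'        => 'Yahoo',
        'sosospider'   => 'Soso',
        'sogou spider' => 'Sogou',
        'yodaobot'     => 'Yodao',
    );
    foreach ($spiders as $needle => $name) {
        if (strpos($ua, $needle) !== false) {
            return $name;
        }
    }
    return false;
}

// Example: a typical Baidu spider User-Agent.
var_dump(detect_spider('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)'));
// prints string(5) "Baidu"
```

A table of keyword/name pairs keeps the function short and makes it easy to add new spiders in one place instead of another `if` block.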
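Once the log file has accumulated some records, a short sketch like the following can answer the original question of how often each spider visits. It assumes records of the form `Time:... robot:Name ...`, one per line (the exact spacing depends on the fwrite format string used); the sample lines here are made up for illustration.

```php
<?php
// Count visits per spider in an array of log lines of the
// form "Time:... robot:Name ...". Returns name => count.
function count_spiders(array $lines)
{
    $counts = array();
    foreach ($lines as $line) {
        // Capture the word after "robot:" on each matching line.
        if (preg_match('/robot:(\S+)/', $line, $m)) {
            $name = $m[1];
            $counts[$name] = isset($counts[$name]) ? $counts[$name] + 1 : 1;
        }
    }
    return $counts;
}

// Illustrative sample records; in practice read the real log,
// e.g. $lines = file('www.jb51.net.txt');
$sample = array(
    'Time:2012-01-01.8:00:01 robot:Baidu UA:Baiduspider',
    'Time:2012-01-01.9:30:12 robot:Google UA:Googlebot',
    'Time:2012-01-01.9:45:55 robot:Baidu UA:Baiduspider',
);
print_r(count_spiders($sample));
```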