Objective
Most webmasters and bloggers care first about how well search engines index their site. Under normal circumstances you can inspect the server's access log files to see which pages the search engines have crawled. Using a short piece of PHP code to record spider visits, however, is more direct and easier to read than digging through raw log files. The sample code is below; readers who need it can follow along.
Sample code
```php
<?php
// Detect the spider/crawler name from the User-Agent header,
// or return false if the visitor is not a known crawler.
function isSpider()
{
    $bots = array(
        'Google'    => 'googlebot',
        'Baidu'     => 'baiduspider',
        'Yahoo'     => 'yahoo! slurp',
        'Soso'      => 'sosospider',
        'Msn'       => 'msnbot',
        'AltaVista' => 'scooter',
        'Sogou'     => 'sogou spider',
        'Yodao'     => 'yodaobot',
    );
    $userAgent = strtolower($_SERVER['HTTP_USER_AGENT']);
    foreach ($bots as $name => $signature) {
        // strstr(haystack, needle): does the UA contain this signature?
        if (strstr($userAgent, $signature)) {
            return $name;
        }
    }
    return false;
}

// If a known spider is visiting, append its trace to a log file.
// Requests with an empty HTTP_USER_AGENT never match, which also
// filters out some content scrapers.
$spider = isSpider();
if ($spider) {
    $agent  = addslashes($_SERVER['HTTP_USER_AGENT']);
    $file   = 'robot.txt';
    $time   = date('Y-m-d H:i:s');
    $url    = $_SERVER['REQUEST_URI'];
    $handle = fopen($file, 'a+');
    fwrite($handle, "time:{$time} robot:{$spider} agent:{$agent} url:{$url}\n");
    fclose($handle);
}
?>
```
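The log file written above is plain text, one visit per line in the form `time:... robot:... agent:... url:...`. To review the collected traces later, a small helper can split each entry back into fields. The function below is a minimal sketch under that assumption; the name `parseSpiderLogLine` is hypothetical and not part of the original code.

```php
<?php
// Hypothetical helper: parse one line of the spider log produced above
// back into an associative array. Assumes the "time:... robot:...
// agent:... url:..." format written by the logger; returns null if the
// line does not match.
function parseSpiderLogLine($line)
{
    $pattern = '/^time:(?<time>.+?) robot:(?<robot>\S+) agent:(?<agent>.+?) url:(?<url>\S+)\s*$/';
    if (preg_match($pattern, trim($line), $m)) {
        return array(
            'time'  => $m['time'],
            'robot' => $m['robot'],
            'agent' => $m['agent'],
            'url'   => $m['url'],
        );
    }
    return null;
}

// Example: parse a made-up sample entry.
$entry = parseSpiderLogLine(
    "time:2024-01-01 12:00:00 robot:Google agent:Mozilla/5.0 (compatible; Googlebot/2.1) url:/index.php"
);
?>
```

Reading the log back this way makes it easy to count visits per crawler or filter by URL, instead of scanning the raw text file by eye.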
Summary
That is the entire content of this article. I hope it is of some help in your study or work; if you have questions, please leave a comment.