Using curl to crawl other sites normally works fine, but it is powerless against http://www.fblife.com/: it always stops after fetching about 16K of data and returns nothing more, yet the HTTP status code is still 200. Seeking expert advice.
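For reference, here is a minimal sketch of the kind of fetch being described, with a few options that sometimes matter for truncated-but-200 responses. The gzip decoding and redirect options are assumptions, not confirmed causes:

<?php
// Minimal sketch of the fetch in question, with extra diagnostics.
// Passing "" to CURLOPT_ENCODING lets curl advertise and decode
// gzip/deflate itself; mishandled compressed bodies are one common
// cause of short-looking responses (an assumption, not a diagnosis).
$ch = curl_init("http://www.fblife.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_ENCODING, "");
$body = curl_exec($ch);
if ($body === false) {
    echo "curl error: " . curl_error($ch) . "\n";
} else {
    echo "HTTP " . curl_getinfo($ch, CURLINFO_HTTP_CODE)
       . ", " . strlen($body) . " bytes\n";
}
curl_close($ch);
?>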
Reply to discussion (solution)
Amitabha, benefactor. If you cannot work out the basics of the program yourself, posting it here will make no difference.
set_time_limit(0);
set_time_limit(0);
That's not the reason, is it?
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)");
Post your code.
I do it this way:
PHP Code
<?php
$u = "http://www.fblife.com/";
$h = curl_init();
curl_setopt($h, CURLOPT_URL, $u);
$s = curl_exec($h);
curl_close($h);
echo $s;
?>
The result is normal; the file is 165.56K.
Accessing the page in a browser works fine. The key point is that I run this PHP script in the background, from the command line; why doesn't it work when run that way?
Post your code.
I do it this way:
PHP Code
<?php
$u = "http://www.fblife.com/";
$h = curl_init();
curl_setopt($h, CURLOPT_URL, $u);
$s = curl_exec($h);
curl_close($h);
echo $s;
?>
The result is normal; the file is 165...
There's nothing you can do with this site either: http://www.ydtuiguang.com/.
It works fine without curl:
set_time_limit(0);
var_dump(file_get_contents("http://www.ydtuiguang.com/"));
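If the plain file_get_contents() call ever stalls the same way, a stream context lets you add a read timeout and a browser-like User-Agent without curl. A minimal sketch; the timeout value and header string are assumptions:

<?php
set_time_limit(0);
// Sketch: the same fetch without curl, with an explicit read timeout
// and a browser-like User-Agent in case the server treats clients
// differently depending on the request headers.
$context = stream_context_create([
    'http' => [
        'timeout'    => 60, // seconds; an assumed value
        'user_agent' => 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)',
    ],
]);
var_dump(file_get_contents("http://www.ydtuiguang.com/", false, $context));
?>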
It works for me. Did you get an error?
There's nothing you can do with this site either: http://www.ydtuiguang.com/.
I tried again today: I saved this code as fblife.php and ran php fblife.php at the Windows command line, and the output was normal. But in a Linux environment, the same command php fblife.php only returns part of the content. Does anyone know whether this is a Linux system problem or something else?
Moreover, running wget "http://www.fblife.com/" on Linux also retrieves only part of the page; see the chunk-logging sketch after the code below.
$u = "http://www.fblife.com/";
$h =curl_init ();
curl_setopt ($h, Curlopt_url, $u);
$s =curl_exec ($h);
Curl_close ($h);
Echo $s;
?>
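Since wget on the same Linux box also stops short, the cutoff may be happening at the transfer level rather than inside PHP. One way to see exactly where the stream dies is to log every chunk curl delivers using a write callback. A diagnostic sketch, not a fix; it assumes PHP 5.3+ for the closure:

<?php
// Diagnostic sketch: count every chunk curl hands over, to see at
// which byte offset http://www.fblife.com/ stops sending data here.
$total = 0;
$ch = curl_init("http://www.fblife.com/");
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$total) {
    $total += strlen($chunk);
    fwrite(STDERR, "chunk: " . strlen($chunk) . " bytes, total: $total\n");
    return strlen($chunk); // returning fewer bytes than received aborts the transfer
});
curl_exec($ch);
if (curl_errno($ch)) {
    echo "curl error: " . curl_error($ch) . "\n";
}
curl_close($ch);
?>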
$timeout = 360; // set the timeout
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
That's not the problem. -_-!!
$timeout = 360; // set the timeout
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
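One note on the suggestion above: CURLOPT_CONNECTTIMEOUT only limits the connection phase, while CURLOPT_TIMEOUT caps the entire transfer, so a body that stalls mid-download would not be affected by the former at all. A sketch setting both; the values are arbitrary:

<?php
$ch = curl_init("http://www.fblife.com/");
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // max seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 360);       // max seconds for the entire transfer
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$s = curl_exec($ch);
echo ($s === false) ? curl_error($ch) . "\n" : strlen($s) . " bytes\n";
curl_close($ch);
?>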