curl times out when crawling this one site. When curl is used to crawl other websites, it fetches them normally.
Reply to discussion (solution)
Amitabha, benefactor. Even basic programming questions end up posted here.
set_time_limit(0);
Isn't that the reason?
@curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)");
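Putting the suggestions above together, a minimal sketch of a full fetch with that user-agent plus explicit timeouts. The URL is the one mentioned in the thread; the timeout values are only illustrative assumptions, not from the original posts:

```php
<?php
// Sketch: curl fetch with the suggested user-agent and explicit timeouts.
set_time_limit(0);                               // no PHP execution time limit

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.ydtuiguang.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return body instead of printing it
curl_setopt($ch, CURLOPT_USERAGENT,
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)");
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);    // assumed value: connect phase only
curl_setopt($ch, CURLOPT_TIMEOUT, 360);          // assumed value: whole transfer

$s = curl_exec($ch);
if ($s === false) {
    echo "curl error: " . curl_error($ch) . "\n";
} else {
    echo strlen($s) . " bytes received\n";
}
curl_close($ch);
```

CURLOPT_RETURNTRANSFER makes curl_exec() return the page as a string so its length can be checked, instead of echoing it directly.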
Paste your code,
I am like this:
The returned result is normal. The file information says 165.56 KB.
The page can be opened in a browser just fine. The key is that PHP runs in the background; why can't it fetch the page there?
Paste your code,
I am like this:
PHP code
$h = curl_init();
curl_setopt($h, CURLOPT_URL, $u);
$s = curl_exec($h);
curl_close($h);
echo $s;
?>
The returned result is normal. The file information says 165 ......
But I can't get anywhere with this website: http://www.ydtuiguang.com/
Forget curl for a moment.
set_time_limit(0);
var_dump(file_get_contents("http://www.ydtuiguang.com/"));
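If file_get_contents works, it can also be given a timeout and user-agent via a stream context, so it is a fair comparison with the curl version. A sketch, assuming allow_url_fopen is enabled; the timeout value is an illustrative assumption:

```php
<?php
// Sketch: file_get_contents with explicit HTTP options via a stream context.
$ctx = stream_context_create([
    'http' => [
        'timeout'    => 360,                 // assumed value, seconds
        'user_agent' => 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.0)',
    ],
]);
var_dump(file_get_contents("http://www.ydtuiguang.com/", false, $ctx));
```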
It works for me. Does it report an error on your side?
But I can't get anywhere with this website: http://www.ydtuiguang.com/
I tried again today. I saved this code as a file named fblife.php. Running php fblife.php from the Windows command line outputs the page normally, but on Linux the same command, php fblife.php, gets only part of it. Does anyone know whether this is a Linux problem or something else?
In addition, on Linux, running wget "http://www.fblife.com/" also gets only part of the page.
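One way to check whether the transfer really is being cut short on Linux is to compare the bytes received against what curl itself reports about the response. This is only a diagnostic sketch using the URL from the thread; everything else is assumption:

```php
<?php
// Diagnostic sketch: compare received size against the announced Content-Length.
$ch = curl_init("http://www.fblife.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 360);          // assumed overall transfer cap

$body = curl_exec($ch);
if ($body === false) {
    echo "curl error: " . curl_error($ch) . "\n"; // e.g. a timeout or reset
} else {
    $info = curl_getinfo($ch);
    printf("announced: %s bytes, received: %d bytes\n",
        $info['download_content_length'],          // -1 if no Content-Length header
        strlen($body));
}
curl_close($ch);
```

If "received" is smaller than "announced", the connection is being closed mid-transfer, which points at the network or server side rather than at PHP; the same result from wget supports that reading.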
$u = "http://www.fblife.com/";
$h = curl_init();
curl_setopt($h, CURLOPT_URL, $u);
$s = curl_exec($h);
curl_close($h);
echo $s;
?>
$timeout = 360; // set the timeout value
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
That's not it -_-!!
$timeout = 360; // set the timeout value
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
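A possible reason the CURLOPT_CONNECTTIMEOUT suggestion doesn't help here: it only limits the connection phase. Once the server has accepted the connection, a slow or stalling transfer is unaffected by it; capping the whole transfer is what CURLOPT_TIMEOUT does. A sketch setting both (the values are illustrative assumptions):

```php
<?php
// CURLOPT_CONNECTTIMEOUT covers only the TCP connect phase;
// CURLOPT_TIMEOUT caps the entire transfer, including the download.
$ch = curl_init("http://www.ydtuiguang.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);    // give up if no connection in 30 s
curl_setopt($ch, CURLOPT_TIMEOUT, 360);          // give up if the transfer exceeds 360 s
$s = curl_exec($ch);
curl_close($ch);
```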