curl_get_contents is much more stable than file_get_contents. The function actually used is shown below.
The code is as follows:
/* More stable than file_get_contents! $timeout is the timeout in seconds; the default is 1 s. */
function curl_get_contents($url, $timeout = 1) {
    $curlHandle = curl_init();
    curl_setopt($curlHandle, CURLOPT_URL, $url);
    curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curlHandle, CURLOPT_TIMEOUT, $timeout);
    $result = curl_exec($curlHandle);
    curl_close($curlHandle);
    return $result;
}
$hx = curl_get_contents('http://www.jb51.net');
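In practice you may also want to know why a request failed: curl_exec() returns false on error, and curl_error() reports the reason. A minimal sketch of such a variant (the checked function name and the $error out-parameter are assumptions, not part of the original article):

function curl_get_contents_checked($url, $timeout = 1, &$error = null) {
    // Same idea as curl_get_contents above, but records the curl
    // error message when the request fails (hypothetical helper).
    $curlHandle = curl_init();
    curl_setopt($curlHandle, CURLOPT_URL, $url);
    curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curlHandle, CURLOPT_TIMEOUT, $timeout);
    $result = curl_exec($curlHandle);
    if ($result === false) {
        $error = curl_error($curlHandle); // human-readable failure reason
    }
    curl_close($curlHandle);
    return $result;
}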
Anyone who has used the file_get_contents function knows that when the target $url is unreachable, the page can hang for a long time, and the PHP process may even pin the CPU at 100%. That is why this function was written.
The original file_get_contents function is kept because the native function is still more appropriate for reading local files.
For another file_get_contents optimization, see: http://www.jb51.net/article/28030.htm
Setting a timeout on file_get_contents itself also solves this problem; if curl is not installed, use this method.
The code is as follows:
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1 // set a timeout, in seconds
    )
));
file_get_contents("http://www.jb51.net/", false, $ctx);
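To combine the two approaches, a wrapper can prefer curl when the extension is available and fall back to the stream-context timeout otherwise. A sketch (the wrapper name http_get_contents is hypothetical, not from the original article):

function http_get_contents($url, $timeout = 1) {
    if (function_exists('curl_init')) {
        // curl extension available: use the more stable curl path
        $curlHandle = curl_init();
        curl_setopt($curlHandle, CURLOPT_URL, $url);
        curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curlHandle, CURLOPT_TIMEOUT, $timeout);
        $result = curl_exec($curlHandle);
        curl_close($curlHandle);
        return $result;
    }
    // Fallback: file_get_contents with a stream-context timeout
    $ctx = stream_context_create(array(
        'http' => array('timeout' => $timeout) // seconds
    ));
    return file_get_contents($url, false, $ctx);
}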
In addition, although my tests are not exhaustive, retrieving pages with curl is much more stable than with file_get_contents.