I use simple_html_dom to crawl a webpage via its object-oriented interface, but the request sometimes times out. Here is the code I use:

set_time_limit(10000);
ini_set('default_socket_timeout', 5);
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 5,
    ),
));
$shd->load_file($player_url, false, $context);

I used the code above to limit the request time, but it does not work: the script only exits once the 10000-second limit is exceeded. What I actually want is for a single request to be aborted after its own timeout, so that I can retry it or move on to the next request. Does anyone have a good solution?
Reply content:
Do not use the network-fetching interface that simple_html_dom provides to download pages. Although it has that capability, it is really only meant for debugging; in real use it easily runs into timeouts like the one you describe. It is better to fetch the content with curl first and then hand the result to simple_html_dom for parsing. curl handles the various network errors much more gracefully.
function get_html_by_url($url, $timeout = 5) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // follow 301/302 redirects automatically
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    // set the various timeout limits
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
    $html = curl_exec($ch);
    // handle network errors (including timeouts)
    if (false === $html) {
        curl_close($ch);
        return false;
    }
    // handle HTTP errors (anything other than 200 OK)
    if (200 != curl_getinfo($ch, CURLINFO_HTTP_CODE)) {
        curl_close($ch);
        return false;
    }
    curl_close($ch);
    return $html;
}

// usage:
$html = get_html_by_url('http://www.sina.com.cn', 5);
// load into simple_html_dom only on success
if (false !== $html) {
    $shd->load($html);
}
Combine this with set_time_limit(0); and increase default_socket_timeout as needed.
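Since the question also asks how to retry after a timeout, here is a minimal sketch of a retry wrapper. The name fetch_with_retry and its parameters are hypothetical, not part of the answer above; the fetcher callable is expected to behave like get_html_by_url, returning the HTML string on success and false on failure:

```php
<?php
// Hypothetical helper: call $fetcher($url, $timeout) up to $max_retries
// times, returning the first non-false result, or false if every
// attempt fails (e.g. all of them timed out).
function fetch_with_retry($fetcher, $url, $max_retries = 3, $timeout = 5) {
    for ($i = 0; $i < $max_retries; $i++) {
        $html = $fetcher($url, $timeout);
        if (false !== $html) {
            return $html;
        }
    }
    return false; // all attempts failed
}
```

With this in place, each URL in a crawl loop gets a few bounded attempts before the script moves on to the next request, which is the behavior the question asks for.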