Method 1: Use fsockopen
The code is as follows:
function get_http_code($url = "localhost", $port = 80, $fsock_timeout = 10) {
    set_time_limit(0);
    ignore_user_abort(true);

    // Record the start time
    $timer['start'] = microtime(true);

    // Make sure the URL has a scheme
    if (!preg_match("/^https?:\/\//i", $url)) {
        $url = "http://" . $url;
    }
    // Use port 443 for HTTPS
    // (note: for HTTPS the host must also be prefixed with ssl:// when calling fsockopen)
    if (preg_match("/^https:\/\//i", $url)) {
        $port = 443;
    }

    // Parse the URL
    $urlinfo = parse_url($url);
    if (empty($urlinfo['path'])) {
        $urlinfo['path'] = '/';
    }
    $host = $urlinfo['host'];
    $uri  = $urlinfo['path'] . (empty($urlinfo['query']) ? '' : '?' . $urlinfo['query']);

    // Open the connection with fsockopen
    if (!$fp = fsockopen($host, $port, $errno, $error, $fsock_timeout)) {
        $timer['end'] = microtime(true);
        return array('code' => -1, 'usetime' => $timer['end'] - $timer['start']);
    }

    // Send the request
    $out  = "GET {$uri} HTTP/1.1\r\n";
    $out .= "Host: {$host}\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if (!fwrite($fp, $out)) {
        $timer['end'] = microtime(true);
        return array('code' => -2, 'usetime' => $timer['end'] - $timer['start']);
    }

    // Read the status line and extract the HTTP status code
    $ret = fgets($fp, 1024);
    preg_match("/HTTP\/\d\.\d\s(\d+)/i", $ret, $m);
    $code = $m[1];
    fclose($fp);

    $timer['end'] = microtime(true);
    return array('code' => $code, 'usetime' => $timer['end'] - $timer['start']);
}
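The two building blocks this function relies on can be tried in isolation. The sketch below (with a made-up URL and a sample status line, both placeholders) shows how parse_url splits the request target and how the regular expression pulls the status code out of the first response line:

```php
<?php
// Hypothetical input URL, just for illustration
$url = 'http://www.example.com/path/page.php?id=1';
$urlinfo = parse_url($url);

$host = $urlinfo['host'];
$uri  = $urlinfo['path'] . (empty($urlinfo['query']) ? '' : '?' . $urlinfo['query']);

echo $host, "\n";  // www.example.com
echo $uri, "\n";   // /path/page.php?id=1

// A sample status line, as fgets() would return it from the socket
$ret = "HTTP/1.1 301 Moved Permanently\r\n";
preg_match("/HTTP\/\d\.\d\s(\d+)/i", $ret, $m);
$code = $m[1];
echo $code, "\n";  // 301
```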
file_get_contents is a simple wrapper around the fsockopen functionality. It is less efficient, but its capture success rate is very high, so I usually fall back to it when Snoopy goes wrong. PHP 5.0.0 added support for a context parameter; with a context it can also send header information, a custom user agent, a referer, and cookies. PHP 5.1.0 added the offset and maxlen parameters to read only part of the file.
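As a sketch of the context feature mentioned above (the header values and URL are only placeholders), a context lets file_get_contents send custom headers, and after a successful request PHP exposes the response headers, including the status line, in $http_response_header. Here a sample status line is parsed instead of making a live request:

```php
<?php
// Build a context that sends a custom user agent, referer and cookie
$opts = array(
    'http' => array(
        'method'  => 'GET',
        'header'  => "User-Agent: MyBot/1.0\r\n" .
                     "Referer: http://www.example.com/\r\n" .
                     "Cookie: name=value\r\n",
        'timeout' => 10,
    ),
);
$context = stream_context_create($opts);

// After a successful call such as:
//   $html = file_get_contents('http://www.example.com/', false, $context);
// PHP fills $http_response_header; its first entry is the status line.
// We parse a sample status line here instead of making a real request:
$status_line = 'HTTP/1.1 200 OK';
preg_match('/HTTP\/\d\.\d\s(\d+)/', $status_line, $m);
$code = (int)$m[1];
echo $code; // 200
```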
Method 2: Use snoopy.class.php
Snoopy is a php class used to simulate browser functions. It can obtain webpage content and send forms.
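A minimal Snoopy sketch might look like the following. It assumes Snoopy.class.php has been downloaded into the current directory (the URL is only a placeholder) and falls back to a notice when the class is missing:

```php
<?php
// Assumption: Snoopy.class.php sits next to this script
if (!class_exists('Snoopy') && file_exists('Snoopy.class.php')) {
    include 'Snoopy.class.php';
}

if (class_exists('Snoopy')) {
    $snoopy = new Snoopy();
    if ($snoopy->fetch('http://www.example.com/')) {
        echo $snoopy->response_code, "\n"; // HTTP status code of the response
        echo $snoopy->results;             // the fetched page body
    }
} else {
    $note = 'Snoopy.class.php not found; download it before running this sketch';
    echo $note;
}
```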
Method 3: Use curl

The code is as follows:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.spiegel.de/');
curl_setopt($ch, CURLOPT_RANGE, '0-500');
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($ch);
curl_close($ch);
echo $result;

/**
 * As noted before, if the server does not honor the Range header but sends
 * the whole file, curl will download all of it. For example, http://www.111cn.net
 * ignores the header. In that case you can additionally set a write-function
 * callback and abort the request once enough data has been received.
 * PHP 5.3+ only; for earlier versions use a named function:
 * function writefn($ch, $chunk) { ... }
 */
$writefn = function ($ch, $chunk) {
    static $data = '';
    static $limit = 500; // 500 bytes, just for this test
    $len = strlen($data) + strlen($chunk);
    if ($len >= $limit) {
        $data .= substr($chunk, 0, $limit - strlen($data));
        echo strlen($data), ' ', $data;
        // Returning a value different from the chunk length aborts the transfer
        return -1;
    }
    $data .= $chunk;
    return strlen($chunk);
};

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.111cn.net/');
curl_setopt($ch, CURLOPT_RANGE, '0-500');
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, $writefn);
$result = curl_exec($ch);
curl_close($ch);
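Besides reading the body, curl can also report the status code directly via curl_getinfo with CURLINFO_HTTP_CODE. A short sketch (the URL is the same placeholder site as above; on a machine without network access, or without the curl extension, the code will simply be 0 because the request never completes):

```php
<?php
if (function_exists('curl_init')) {
    $ch = curl_init('http://www.111cn.net/');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_NOBODY, 1);  // HEAD-style request: we only want the status
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_exec($ch);
    // CURLINFO_HTTP_CODE is 0 if the request never completed
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
} else {
    $code = 0; // curl extension not available
}
echo $code;
```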
Some common status codes are:
200 - the server returned the page successfully
404 - the requested page does not exist
503 - the service is temporarily unavailable (server overloaded or down for maintenance)
301 - the page has been permanently redirected
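The code returned by any of the methods above can be grouped by its first digit. This small helper (the name classify_http_code is made up for this example) sketches the idea:

```php
<?php
// Hypothetical helper: map a status code to a rough category by its range
function classify_http_code($code) {
    if ($code >= 200 && $code < 300) return 'success';      // e.g. 200
    if ($code >= 300 && $code < 400) return 'redirect';     // e.g. 301
    if ($code >= 400 && $code < 500) return 'client error'; // e.g. 404
    if ($code >= 500 && $code < 600) return 'server error'; // e.g. 503
    return 'unknown';                                       // e.g. -1 from get_http_code
}

echo classify_http_code(200), "\n"; // success
echo classify_http_code(301), "\n"; // redirect
echo classify_http_code(404), "\n"; // client error
echo classify_http_code(503), "\n"; // server error
```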