This PHP tutorial demonstrates how to crawl multiple pages in parallel; the key is achieving parallel I/O within a single thread.
Under normal circumstances, programs that crawl multiple pages are written serially, but then the collection cycle gets far too long to be practical. So I thought of using curl to crawl in parallel, only to discover that my virtual server has no curl extension at all, which was really frustrating. So I changed my mind and decided to achieve the effect of multiple threads with a single thread. Anyone who has done a bit of network programming surely knows the concept of I/O multiplexing, and PHP supports it natively, with no extension required.
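At its core this means stream_select(), PHP's counterpart to the C select() call: you hand it arrays of streams and it blocks until at least one of them is ready. Here is a minimal sketch of the pattern (the URLs and the 5-second timeout are just placeholders, not part of the original code):

<?php
// Open two HTTP streams; fopen() works here because PHP's http://
// wrapper is itself a stream.
$a = fopen("http://www.example.com/", "r");
$b = fopen("http://www.example.org/", "r");
$read = array($a, $b);
$write = null;
$except = null;
// Block until at least one stream is readable, for at most 5 seconds.
if (stream_select($read, $write, $except, 5) > 0) {
    // $read now contains only the streams that actually have data.
    foreach ($read as $fp) {
        echo fread($fp, 1024);
    }
}
?>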
Even people with years of programming experience may not know PHP's stream functions well. PHP wraps compressed file streams, ordinary file streams, and the TCP protocol in a single stream abstraction, so reading a network resource is no different from reading a local file.
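To illustrate that uniformity, here is a minimal sketch (the file paths and URL are placeholders): the exact same read loop handles a plain file, a gzip-compressed file, and an HTTP resource.

<?php
$sources = array(
    "/tmp/a.txt",                  // ordinary local file
    "compress.zlib:///tmp/a.gz",   // gzip-compressed file, via a stream wrapper
    "http://www.example.com/",     // network resource over TCP
);
foreach ($sources as $src) {
    $fp = @fopen($src, "r");
    if ($fp === false) {
        continue; // skip sources that do not exist
    }
    while (!feof($fp)) {
        echo fread($fp, 1024);
    }
    fclose($fp);
}
?>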
Having said all that, I think the idea is basically clear, so let's go straight to the code. It is rather rough; if you want to actually use it, there are still a number of details to handle.
Code
<?php
function http_get_open($url)
{
    $url = parse_url($url);
    if (empty($url['host'])) {
        return false;
    }
    $host = $url['host'];
    if (empty($url['path'])) {
        $url['path'] = "/";
    }
    $get = $url['path'] . "?" . @$url['query'];
    // Open a plain TCP connection to port 80 (30-second connect timeout).
    $fp = stream_socket_client("tcp://{$host}:80", $errno, $errstr, 30);
    if (!$fp) {
        echo "$errstr ($errno)\n";
        return false;
    }
    // Send a minimal HTTP/1.0 request; the caller reads the response.
    fwrite($fp, "GET {$get} HTTP/1.0\r\nHost: {$host}\r\nAccept: */*\r\n\r\n");
    return $fp;
}
function http_multi_get($urls)
{
    $result = array();
    $fps = array();
    // Kick off all requests first so they run concurrently.
    foreach ($urls as $key => $url) {
        $fp = http_get_open($url);
        if ($fp === false) {
            $result[$key] = false;
        } else {
            $result[$key] = '';
            $fps[$key] = $fp;
        }
    }
    while (1) {
        $reads = $fps;
        if (empty($reads)) {
            break;
        }
        // Block until at least one socket is readable (30-second timeout).
        $write = null;
        $except = null;
        $num = stream_select($reads, $write, $except, 30);
        if ($num === false) {
            echo "error";
            return false;
        } else if ($num > 0) { // some sockets are readable
            foreach ($reads as $value) {
                $key = array_search($value, $fps);
                if (!feof($value)) {
                    $result[$key] .= fread($value, 128);
                } else {
                    fclose($value);
                    unset($fps[$key]);
                }
            }
        } else { // timed out
            echo "timeout";
            return false;
        }
    }
    // Split each raw response into headers and body at the first blank line.
    foreach ($result as $key => &$value) {
        if ($value) {
            $value = explode("\r\n\r\n", $value, 2);
        }
    }
    return $result;
}
$urls = array();
$urls[] = "http://www.qq.com";
$urls[] = "http://www.sina.com.cn";
$urls[] = "http://www.sohu.com";
$urls[] = "http://www.blue1000.com";

// Parallel crawl
$t1 = microtime(true);
$result = http_multi_get($urls);
$t1 = microtime(true) - $t1;
var_dump("cost: " . $t1);

// Serial crawl
$t1 = microtime(true);
foreach ($urls as $value) {
    file_get_contents($value);
}
$t1 = microtime(true) - $t1;
var_dump("cost: " . $t1);
?>
Results of the last run:
string 'cost: 3.2403128147125' (length=21)
string 'cost: 6.2333900928497' (length=21)
That is roughly twice the efficiency. Of course, Sina turned out to be very slow, at around 2.5s, and it basically drags the whole batch down, while 360 only takes about 0.2s.
If all the sites responded at roughly the same speed, then the more requests run in parallel, the bigger the gap would be.
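One caveat about the code above: stream_socket_client() blocks until each TCP connection is established, so the connects still happen one after another; only the reads overlap. As a sketch of one possible improvement (the function name here is hypothetical, not from the original), PHP's STREAM_CLIENT_ASYNC_CONNECT flag lets the open return immediately; the caller would then have to wait for the socket to become writable, e.g. with stream_select(), before sending the request:

<?php
// Variant that returns immediately instead of waiting for the connect.
function http_open_async($host)
{
    $fp = stream_socket_client(
        "tcp://{$host}:80",
        $errno,
        $errstr,
        30,
        STREAM_CLIENT_ASYNC_CONNECT | STREAM_CLIENT_CONNECT
    );
    if (!$fp) {
        return false;
    }
    stream_set_blocking($fp, 0); // make subsequent reads non-blocking too
    return $fp;
}
?>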