Capturing web pages in parallel with single-threaded PHP
This PHP tutorial shows how to fetch the content of multiple pages in parallel. The key is achieving parallel I/O within a single thread.
Programs that fetch many pages usually work serially, but then the total fetch time is far too long to be practical, so I wanted to use curl for parallel fetching. It turned out, frustratingly, that curl is not available on my virtual server. So I changed my approach and decided to get the effect of multiple threads out of a single thread, using plain network programming.
Anyone who has done network programming will know the concept of I/O multiplexing. PHP supports it natively; no extension is required.
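As a quick illustration of the primitive involved (a minimal sketch; the hosts and the 30-second timeout are arbitrary choices, not part of the original code), stream_select() blocks until at least one of a set of streams is ready, instead of waiting on each stream in turn:

<?php
// Minimal sketch of I/O multiplexing: wait on two sockets at once.
$a = stream_socket_client("tcp://www.qq.com:80", $errno, $errstr, 30);
$b = stream_socket_client("tcp://www.sohu.com:80", $errno, $errstr, 30);
fwrite($a, "GET / HTTP/1.0\r\nHost: www.qq.com\r\n\r\n");
fwrite($b, "GET / HTTP/1.0\r\nHost: www.sohu.com\r\n\r\n");
$read = array($a, $b);
$write = null;
$except = null;
// Returns as soon as either socket has data to read
if (stream_select($read, $write, $except, 30) > 0) {
    foreach ($read as $fp) { // $read now holds only the ready sockets
        echo fread($fp, 128);
    }
}
?>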
Even programmers with many years of experience may not know much about PHP's stream functions. PHP wraps compressed files, ordinary files, and TCP connections in the same stream abstraction, so reading a local file is no different from reading a network resource.
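For example (a minimal sketch; the file paths are placeholders, and the compress.zlib:// wrapper assumes PHP's usual built-in zlib support), the very same fopen()/fread() calls work on all three kinds of streams:

<?php
// One stream API for local files, web pages, and compressed files.
// The paths below are placeholders.
$sources = array(
    "/tmp/local.txt",                    // plain file stream
    "http://www.qq.com/",                // HTTP over TCP
    "compress.zlib:///tmp/local.txt.gz", // compressed file stream
);
foreach ($sources as $path) {
    $fp = fopen($path, "r");
    if ($fp) {
        while (!feof($fp)) {
            echo fread($fp, 1024);
        }
        fclose($fp);
    }
}
?>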
With that said, I think the idea is clear, so let's go straight to the code. It is rough; if you want to use it for real, you will still need to take care of some details.
Code
<?php
function http_get_open($url)
{
    $url = parse_url($url);
    if (empty($url['host'])) {
        return false;
    }
    $host = $url['host'];
    if (empty($url['path'])) {
        $url['path'] = "/";
    }
    $get = $url['path'] . (isset($url['query']) ? "?" . $url['query'] : "");
    $fp = stream_socket_client("tcp://{$host}:80", $errno, $errstr, 30);
    if (!$fp) {
        echo "$errstr ($errno)\n";
        return false;
    } else {
        // The request must end with a blank line (\r\n\r\n), or the server keeps waiting
        fwrite($fp, "GET {$get} HTTP/1.0\r\nHost: {$host}\r\nAccept: */*\r\n\r\n");
    }
    return $fp;
}
function http_multi_get($urls)
{
    $result = array();
    $fps = array();
    // Open a socket for every URL and send all requests up front
    foreach ($urls as $key => $url)
    {
        $fp = http_get_open($url);
        if ($fp === false) {
            $result[$key] = false;
        } else {
            $result[$key] = '';
            $fps[$key] = $fp;
        }
    }
    while (1)
    {
        $reads = $fps;
        if (empty($reads)) {
            break;
        }
        $w = null;
        $e = null;
        if (($num = stream_select($reads, $w, $e, 30)) === false) {
            echo "error";
            return false;
        } else if ($num > 0) { // some streams are readable
            foreach ($reads as $value)
            {
                $key = array_search($value, $fps);
                if (!feof($value)) {
                    $result[$key] .= fread($value, 128);
                } else {
                    unset($fps[$key]); // this page is finished
                }
            }
        } else { // timed out
            echo "timeout";
            return false;
        }
    }
    foreach ($result as $key => &$value)
    {
        if ($value) {
            // Split the status line off the rest of the response
            $value = explode("\r\n", $value, 2);
        }
    }
    unset($value);
    return $result;
}
$urls = array();
$urls[] = "http://www.qq.com";
$urls[] = "http://www.sina.com.cn";
$urls[] = "http://www.sohu.com";
$urls[] = "http://www.blue1000.com";

// Parallel capture
$t1 = microtime(true);
$result = http_multi_get($urls);
$t1 = microtime(true) - $t1;
var_dump("cost: " . $t1);

// Serial capture
$t1 = microtime(true);
foreach ($urls as $value)
{
    file_get_contents($value);
}
$t1 = microtime(true) - $t1;
var_dump("cost: " . $t1);
?>
The final running result:
string 'cost: 3.2403128147125' (length=21)
string 'cost: 6.2333900928497' (length=21)
That is roughly double the efficiency. It turns out Sina is very slow, taking about 2.5 s, and it basically dragged the whole parallel run down; the other sites needed only about 0.2 s.
If all the sites responded at roughly the same speed, the gap between parallel and serial would grow with the number of pages fetched in parallel.
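One of the details mentioned above deserves a note: stream_socket_client() as used here can block for up to 30 s per connection, so the connects themselves still happen one after another; only the reads are multiplexed. A possible fix is PHP's STREAM_CLIENT_ASYNC_CONNECT flag. The sketch below shows the idea (the function name http_get_open_async is mine, not from the original code):

<?php
// Variant of http_get_open that does not block while connecting.
// STREAM_CLIENT_ASYNC_CONNECT lets the TCP handshake proceed in the
// background, so all connects can be started before any of them finishes.
function http_get_open_async($host, $get)
{
    $fp = stream_socket_client(
        "tcp://{$host}:80",
        $errno,
        $errstr,
        30,
        STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT
    );
    if (!$fp) {
        return false;
    }
    stream_set_blocking($fp, false);
    // Strictly, the socket should first be waited on with stream_select()
    // for writability before sending the request; omitted here for brevity.
    fwrite($fp, "GET {$get} HTTP/1.0\r\nHost: {$host}\r\nAccept: */*\r\n\r\n");
    return $fp;
}
?>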