Looking for an efficient and feasible way to collect a large number of web pages with PHP

Source: Internet
Author: User
I am looking for an efficient and feasible way to collect a large number of web pages. I am using PHP cURL to collect music information from Xiami, but it is very slow: after about 50 pages the script stalls and the page hangs, and a second run collects no data at all. I suspect Xiami identifies my IP address and blocks further requests, which is why collection is so slow.
How can such a large amount of data be collected?
It may also be a problem with my code. Here is the code:
$j = 0;                       // loop counter
$id = 200000;                 // starting song ID; collect 1000 records
$data = array();              // holds the collected data

while ($j < 1000) {
    $url = 'http://www.xiami.com/song/' . ($id++);

    $header = array();
    $header[] = 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8';
    $header[] = 'Accept-Encoding: gzip, deflate, sdch';
    $header[] = 'Accept-Language: zh-CN,zh;q=0.8';
    $header[] = 'Cache-Control: max-age=0';
    $header[] = 'Connection: keep-alive';
    $header[] = 'Cookie: ...';        // session cookie omitted (value garbled in the original post)
    $header[] = 'Host: www.xiami.com';
    $header[] = 'User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1653.0 Safari/537.36';

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);            // the URL to fetch
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);  // set the HTTP headers
    curl_setopt($ch, CURLOPT_HEADER, 0);            // do not include response headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);    // return the response as a string
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);          // timeout to prevent an endless wait
    $content = curl_exec($ch);                      // execute the request
    $status = curl_getinfo($ch);                    // $status['redirect_url'] holds the new address after a redirect
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    curl_close($ch);                                // close the cURL session

    // parse the meta description: song title, singer, album
    preg_match('/name="description"\s+content="《(.+)》singer(.+),album《(.+)》"/', $content, $matches);
    // skip this page if the song title is empty
    if (empty($matches[1]) || trim($matches[1]) == '') {
        continue;
    }
    $data[$id]['song']   = empty($matches[1]) ? '' : $matches[1];
    $data[$id]['singer'] = empty($matches[2]) ? '' : $matches[2];
    $data[$id]['album']  = empty($matches[3]) ? '' : $matches[3];

    preg_match('/album\/(\d+)/', $content, $matches);
    $data[$id]['albumid'] = empty($matches[1]) ? 0 : $matches[1];

    preg_match('/\/artist\/(\d+)/', $content, $matches);
    $data[$id]['singerid'] = empty($matches[1]) ? 0 : $matches[1];

    // lyrics (the opening tag of this pattern was lost in the post; '<p>' is assumed)
    preg_match('/<p[^>]*>(.*)<\/p>/us', $content, $matches);
    $data[$id]['lrc'] = empty($matches[1]) ? '' : addslashes($matches[1]);

    // share count, e.g. "share(3269)"
    preg_match('/share\((\d+)\)<\/em>/us', $content, $matches);
    $data[$id]['share'] = empty($matches[1]) ? 0 : $matches[1];

    // comment count, e.g. "920" (opening tag also lost; '<span>' is assumed)
    preg_match('/(\d+)<\/span>/', $content, $matches);
    $data[$id]['comment_count'] = empty($matches[1]) ? 0 : $matches[1];

    // write $data to the database here
    // print_r($data);

    $j++;
    usleep(3000);   // note: usleep() takes microseconds, so this pauses only 3 ms
}
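Given the IP-based blocking described above, one approach is to watch the HTTP status code and slow down between requests. A minimal sketch; `fetch_song_page()` and `random_delay_usec()` are hypothetical helper names, not part of the original script:

```php
<?php
// Sketch: detect an IP block and back off, instead of hammering the server.

// Randomized delay in microseconds, between $minSeconds and $maxSeconds.
function random_delay_usec($minSeconds, $maxSeconds) {
    return mt_rand($minSeconds * 1000000, $maxSeconds * 1000000);
}

// Fetch one page; return false when the server signals a block (403/429).
function fetch_song_page($url, array $headers) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    $content = curl_exec($ch);
    $status  = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($status == 403 || $status == 429) {
        return false;   // blocked: the caller should pause much longer
    }
    return $content;
}

// Inside the loop, waiting 1-3 s between requests instead of usleep(3000)
// (which is only 3 ms) makes the traffic look far less like a bot:
// usleep(random_delay_usec(1, 3));
```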

Reply to discussion (solution)

Use the snoopy class.

Use Ruby or Go

Just kidding. But if you want it to run reliably, at least run it in command-line (CLI) mode...

It is probably because the xiami.com server has anti-scraping restrictions that block collection.

1. Each request should collect only 10-20 pages, then redirect to itself to continue. This also avoids page timeouts; if you run on a VM and occupy the CPU for a long time, the process may be killed.

2. It is best to change the User-Agent and Cookie in each request's headers.

3. If that doesn't work, try the 火车头 (LocoySpider) collector!

4. If 火车头 doesn't work either, give up on that site!
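Tip 2 above can be sketched as follows: pick a different User-Agent for every request. The agent strings are only examples; any set of real browser User-Agents will do, and `build_headers()` is a hypothetical helper:

```php
<?php
// Sketch: rotate the User-Agent on each request.
$agents = array(
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1653.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0',
);

// Build the header array for one request with a randomly chosen agent.
function build_headers(array $agents) {
    $header   = array();
    $header[] = 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8';
    $header[] = 'Accept-Language: zh-CN,zh;q=0.8';
    $header[] = 'User-Agent: ' . $agents[array_rand($agents)];
    return $header;
}
```

The result can be passed straight to `curl_setopt($ch, CURLOPT_HTTPHEADER, build_headers($agents));` in place of the fixed `$header` array.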

Split the big loop into separate executions of the same script, one chunk per request.
The first time, a browser or a crontab job calls http://localhost/caiji.php?num=1. After each run completes, the script adds 1 to $_GET['num'] and uses cURL to call itself again. When $_GET['num'] reaches 1000, it exits without issuing another request.

if (!empty($_GET['num'])) {
    // collect the page for this num here, e.g.:
    // $url = 'http://www.xiami.com/song/' . (200000 + $_GET['num']);
}
if ($_GET['num'] < 1001) {
    // chain a request to the same script with num + 1
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://localhost/caiji.php?num=" . ($_GET['num'] + 1));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($ch, CURLOPT_TIMEOUT, 2);
    curl_exec($ch);
    curl_close($ch);
} else {
    exit;
}
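The chaining logic of this reply can be isolated into one small function, which makes the stop condition easy to verify. `next_relay_url()` is a hypothetical helper and `caiji.php` is the collector script from the reply:

```php
<?php
// Sketch: each run computes the next chained request, stopping at $max.
function next_relay_url($num, $max = 1000) {
    if ($num >= $max) {
        return null;   // reached the limit: do not chain another request
    }
    return 'http://localhost/caiji.php?num=' . ($num + 1);
}
```

A run would then issue a cURL request to `next_relay_url($_GET['num'])` only when the return value is not null.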
