Capturing and analyzing data on millions of users with PHP

This article walks through the PHP development work involved in capturing and analyzing data on millions of users, step by step.

Preparations before development

Install Ubuntu in a VMware virtual machine;

Install PHP 5.6 or later;

Install the curl and pcntl extensions.

Use PHP curl extension to capture page data

PHP's curl extension is a library supported by PHP that lets you connect to and communicate with many kinds of servers over a variety of protocols.

This program captures user data from Zhihu, and you must be logged in to access a user's personal page. When we click a user's profile picture link in the browser and land on that user's personal center page, we can see the user's information because the browser automatically carries the local cookies along to the new page. Therefore, before the crawler can access a personal page, it must first obtain valid cookie information and then attach those cookies to every curl request. I simply used my own account's cookies, which can be viewed and copied in the browser.

Copy them one by one into a string of the form "__utma=xxx; __utmb=xxx; ...". This cookie string can then be sent along with each request.
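As a minimal sketch of assembling that string (the names and values below are placeholders to be replaced with the pairs copied from your own browser, not real cookies):

// placeholder values: copy the real name=value pairs from your own browser
$cookie_pairs = array(
    '__utma' => 'xxx',
    '__utmb' => 'xxx',
);
$parts = array();
foreach ($cookie_pairs as $name => $value) {
    $parts[] = $name . '=' . $value;
}
$user_cookie = implode('; ', $parts); // e.g. "__utma=xxx; __utmb=xxx"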

Initial example:

$url = 'http://www.zhihu.com/people/mora-hu/about'; // here, mora-hu is the user ID
$ch = curl_init($url); // initialize the session
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_COOKIE, $this->config_arr['user_cookie']); // set the request cookie
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the result of curl_exec() as a string instead of printing it directly
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
$result = curl_exec($ch);
return $result; // the captured page

Running the code above fetches the personal center page of the user mora-hu. You can then process the returned HTML with regular expressions to extract the information you want to crawl, such as the name and gender.
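For illustration, a rough sketch of such an extraction (the tag and class name in the pattern are assumptions, not Zhihu's real markup):

// hypothetical sketch: adjust the pattern to the page's actual structure
$html = $result; // the HTML returned by the curl code above
if (preg_match('/<span class="name">([^<]+)<\/span>/', $html, $matches)) {
    $name = $matches[1];
    echo "name: " . $name . "\n";
}
// gender and other fields can be extracted the same way with their own patterns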

Image hotlink protection

When the processed results were rendered on a page, the user profile pictures would not display. Some reading revealed that Zhihu applies hotlink protection to its images. The workaround is to forge a Referer header when requesting the images.

After extracting the image link with a regular expression, send another request for the image, this time carrying a Referer header that claims the request was forwarded from the Zhihu website. An example is as follows:

function getImg($url, $u_id)
{
    if (file_exists('./images/' . $u_id . ".jpg")) {
        return "images/$u_id" . '.jpg';
    }
    if (empty($url)) {
        return '';
    }
    $context_options = array(
        'http' => array(
            'header' => "Referer: http://www.zhihu.com" // forge the Referer header
        )
    );
    $context = stream_context_create($context_options);
    $img = file_get_contents('http:' . $url, false, $context);
    file_put_contents('./images/' . $u_id . ".jpg", $img);
    return "images/$u_id" . '.jpg';
}

After capturing your own profile, you need to visit the user's followers and the list of users the user follows in order to reach more users, and then work outward level by level. The personal center page contains two such links:

One link leads to the users this person follows (the followees), the other to this person's followers. Take the followees link as an example: match the corresponding link with a regular expression, and after obtaining the URL, send another curl request carrying the cookie to capture the followees list page.

Analyzing the HTML structure of that page shows that, to get the user information, you only need to grab the block element that wraps each entry together with the user name it contains.

Each followed user's page URL has the same form; only the user name differs. Get the list of user names with a regular expression, splice the URLs together one by one, and send the requests one at a time (doing this serially is slow, of course; a faster approach is discussed later). After landing on a new user's page, repeat the steps above, and keep looping until you have gathered as much data as you want.
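A rough sketch of this step, assuming the followee links look like /people/<username> ($followees_html stands for the captured list page; the pattern is an assumption, not Zhihu's exact markup):

// extract usernames from links of the form /people/<username>
preg_match_all('/\/people\/([\w\-]+)/', $followees_html, $matches);
$user_list = array_unique($matches[1]);
foreach ($user_list as $username) {
    $url = 'http://www.zhihu.com/people/' . $username . '/about';
    // fetch $url with curl, carrying the cookie as in the first example
}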

Counting files in Linux

After the script had been running for a while, I wanted to check how many images had been downloaded. With a large data volume, opening the folder and counting the images by hand is slow. Since the script runs in a Linux environment, you can count the files with a Linux command:

The Code is as follows:


ls -l | grep "^-" | wc -l


Here, ls -l outputs the long listing of the files in the directory (the entries can be directories, links, device files, and so on); grep "^-" filters the long listing and keeps only regular files (to keep only directories, use "^d" instead); and wc -l counts the number of lines in the filtered output.
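For example, to count the profile images downloaded into the ./images directory used by getImg() above (a minimal illustration, assuming that directory layout):

ls -l ./images | grep "^-" | wc -l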

Handling duplicate data when inserting into MySQL

After the program had run for a while, I found that many users' data was duplicated, so duplicates have to be handled when inserting user data. The options are as follows:

1) Check whether the record already exists in the database before inserting it;

2) Add a unique index, then INSERT INTO ... ON DUPLICATE KEY UPDATE ...

3) Add a unique index, then INSERT IGNORE INTO ...

4) Add a unique index, then REPLACE INTO ...
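A minimal sketch of option 2 using mysqli; the table name, column names, and connection parameters are assumptions for illustration, not taken from the article:

// assumes a table like:
//   CREATE TABLE user (
//     id INT AUTO_INCREMENT PRIMARY KEY,
//     user_name VARCHAR(64) NOT NULL,
//     gender VARCHAR(8),
//     UNIQUE KEY uk_user_name (user_name)
//   );
$mysqli = new mysqli('127.0.0.1', 'db_user', 'db_password', 'zhihu_spider');
$stmt = $mysqli->prepare(
    'INSERT INTO user (user_name, gender) VALUES (?, ?)
     ON DUPLICATE KEY UPDATE gender = VALUES(gender)'
);
$stmt->bind_param('ss', $user_name, $gender);
$stmt->execute();

Options 3 and 4 only change the statement (INSERT IGNORE or REPLACE INTO); in every case it is the unique index that makes the deduplication work.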

Use curl_multi to capture pages with multiple threads

With a single process and a single curl handle, crawling is very slow: after running all night on the host, it had captured only about 20,000 (2W) records. So I wondered whether I could request several new user pages at once when sending curl requests, and then I found the very useful curl_multi. The curl_multi functions can request multiple URLs at the same time instead of one by one, much like one process in Linux running multiple threads. Below is an example of using curl_multi to implement this multi-threaded-style crawler:

$mh = curl_multi_init(); // create a new cURL multi handle
for ($i = 0; $i < $max_size; $i++) {
    $ch = curl_init(); // initialize an individual cURL session
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_URL, 'http://www.zhihu.com/people/' . $user_list[$i] . '/about');
    curl_setopt($ch, CURLOPT_COOKIE, self::$user_cookie);
    // the browser version numbers were lost in the original; substitute a real browser User-Agent string
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT x.x; WOW64) AppleWebKit/xxx (KHTML, like Gecko) Chrome/xxx Safari/xxx');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    $requestMap[$i] = $ch;
    curl_multi_add_handle($mh, $ch); // add the individual curl handle to the multi handle
}

$user_arr = array();
do {
    // run the sub-connections of the current multi handle
    while (($cme = curl_multi_exec($mh, $active)) == CURLM_CALL_MULTI_PERFORM);
    if ($cme != CURLM_OK) {
        break;
    }
    // read the transfer information of the cURL handles that have finished
    while ($done = curl_multi_info_read($mh)) {
        $info = curl_getinfo($done['handle']);
        $tmp_result = curl_multi_getcontent($done['handle']);
        $error = curl_error($done['handle']);
        $user_arr[] = array_values(getUserInfo($tmp_result));

        // keep $max_size requests running at the same time
        if ($i < sizeof($user_list) && isset($user_list[$i])) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_URL, 'http://www.zhihu.com/people/' . $user_list[$i] . '/about');
            curl_setopt($ch, CURLOPT_COOKIE, self::$user_cookie);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT x.x; WOW64) AppleWebKit/xxx (KHTML, like Gecko) Chrome/xxx Safari/xxx');
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            $requestMap[$i] = $ch;
            curl_multi_add_handle($mh, $ch);
            $i++;
        }
        curl_multi_remove_handle($mh, $done['handle']);
    }
    if ($active) {
        curl_multi_select($mh);
    }
} while ($active);

curl_multi_close($mh);
return $user_arr;

HTTP 429 Too Many Requests

The curl_multi functions can send many requests at the same time, but when I executed 200 simultaneous requests I found that many of them never returned, which amounts to packet loss. For further analysis I used curl_getinfo to print the information for each request handle. This function returns an associative array of HTTP response information; one of its fields, http_code, is the HTTP status code of the request. Many requests had an http_code of 429, which means too many requests were being sent. Guessing that Zhihu has anti-crawler protection, I tested other websites and found that sending 200 requests at once was fine there, confirming my guess: Zhihu limits the number of simultaneous requests. I kept reducing the request count and found that there was no packet loss at 5, so this program can only send 5 requests at a time. That is not many, but it is still a small improvement.
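A minimal sketch of that check, placed inside the curl_multi_info_read() loop of the earlier example; re-queueing the rate-limited URLs is my own addition for illustration:

$max_size = 5; // with 5 concurrent requests no packet loss was observed

// inside the curl_multi_info_read() loop:
$info = curl_getinfo($done['handle']);
if ($info['http_code'] == 429) {
    $retry_urls[] = $info['url']; // rate-limited: remember the URL so it can be requested again later
} elseif ($info['http_code'] == 200) {
    $user_arr[] = array_values(getUserInfo(curl_multi_getcontent($done['handle'])));
}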

Use Redis to record users that have already been visited

While crawling, I noticed that some users had already been visited, and their followers and followees had already been fetched. Although duplicates were handled at the database layer, the program still sent curl requests for them, so these repeated requests wasted a lot of network overhead. In addition, the users still waiting to be crawled had to be stored somewhere for the next run. At first they were kept in an array, but once the program became multi-process this no longer worked: in multi-process programming, child processes share the program code and libraries, but each process has its own variables, which cannot be read by other processes, so an array cannot be shared.

So I turned to Redis to store both the users that have already been processed and the users waiting to be crawled. Every time a user is finished, it is pushed onto an already_request_queue; the users to be crawled (that is, each user's followers and followees) are pushed onto a request_queue. Before each fetch, a user is popped from request_queue and checked against already_request_queue: if it is already there, move on to the next one; otherwise, crawl it.

Example of using redis in PHP:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('tmp', 'value');
if ($redis->exists('tmp')) {
    echo $redis->get('tmp') . "\n";
}
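Building on that, here is a rough sketch of the queue logic described above. The queue names come from the text; treating already_request_queue as a Redis set (sAdd/sIsMember) for the membership check, and the $username/$new_users variables, are my own assumptions:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// after a user's page has been processed, mark the user as done
$redis->sAdd('already_request_queue', $username);

// push newly discovered followers/followees onto the to-do queue
foreach ($new_users as $u) {
    $redis->lPush('request_queue', $u);
}

// before each fetch, pop a user and skip it if it was already handled
$next = $redis->rPop('request_queue');
if ($next !== false && !$redis->sIsMember('already_request_queue', $next)) {
    // crawl this user's page
}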

Use the pcntl extension of PHP to implement multiple processes

Even after using curl_multi to capture user information concurrently, the program ran for another whole night and the amount of data finally obtained still fell short of my goal, so I kept optimizing. Later I found that PHP has a pcntl extension that supports multi-process programming. Below is an example of multi-process programming:

// PHP multi-process demo
// fork 10 child processes
for ($i = 0; $i < 10; $i++) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        echo "Could not fork!\n";
        exit(1);
    }
    if (!$pid) {
        echo "child process $i running\n";
        // exit when the child is done, so the child does not fork new children itself
        exit($i);
    }
}
// wait for the child processes to finish, to avoid zombie processes
while (pcntl_waitpid(0, $status) != -1) {
    $status = pcntl_wexitstatus($status);
    echo "Child $status completed\n";
}

View the cpu information of the system in Linux

With multi-process programming in place, I figured I would just start a few more processes and keep crawling user data, so I started eight processes and ran them for a night, only to find roughly 20W (200,000) records, which was not much of an improvement. I then read that, according to common advice on system performance tuning, the maximum number of processes should not be chosen arbitrarily but according to the number of CPU cores: it should be about twice the number of cores. So the next step was to look at the CPU information and find out how many cores there are. The command for viewing CPU information in Linux:

The Code is as follows:


cat /proc/cpuinfo

In the output, model name gives the CPU model and cpu cores gives the number of cores. Here the core count is 1: the program runs in a virtual machine with few CPU cores allocated, so only two processes could be started. The final result was 1.1 million user records captured over a weekend.
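To turn the rule of thumb above into code, a small sketch that derives the process cap from the logical processor count (reading /proc/cpuinfo via shell_exec is my own choice, not something from the article):

// count the "processor" lines in /proc/cpuinfo (the number of logical processors)
$cpu_count = (int) trim(shell_exec('grep -c "^processor" /proc/cpuinfo'));
$max_processes = 2 * max(1, $cpu_count); // at most twice the number of cores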

Redis and MySQL connections in multi-process programming

After the program had run for a while under multiple processes, data stopped being inserted into the database and MySQL reported a "too many connections" error; the same thing happened with Redis.

The code that triggers the failure looks like this:

<?php
for ($i = 0; $i < 10; $i++) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        echo "Could not fork!\n";
        exit(1);
    }
    if (!$pid) {
        $redis = PRedis::getInstance();
        // do something
        exit;
    }
}

The root cause is that when each child process is created, it inherits an identical copy of the parent process. Objects can be copied, but an established connection cannot be copied into several independent connections. The result is that every process uses the same Redis connection and performs its own operations on it, which eventually leads to inexplicable conflicts.

Solution:
The program cannot fully guarantee that the parent process will not create a Redis connection instance before forking, so the only reliable fix is to let each child process take care of this itself. If the instance obtained inside a child process were tied to the current process only, the problem would not exist. The solution is therefore to slightly modify the static method that instantiates the Redis class so that the cached instance is keyed by the current process ID.
The modified code is as follows:

<?php
public static function getInstance()
{
    static $instances = array();
    $key = getmypid(); // get the current process ID
    if (empty($instances[$key])) {
        $instances[$key] = new self();
    }
    return $instances[$key];
}

Measuring script execution time in PHP

Because I wanted to know how much time each process spends, I wrote a function to measure the script execution time:

function microtime_float()
{
    list($u_sec, $sec) = explode(' ', microtime());
    return (floatval($u_sec) + floatval($sec));
}

$start_time = microtime_float();
// do something
usleep(100);
$end_time = microtime_float();
$total_time = $end_time - $start_time;
$time_cost = sprintf("%.10f", $total_time);
echo "program cost total " . $time_cost . "s\n";
