This section: How to sort big data with Shell and PHP

A typical big-data problem: given a 4GB file, how do you count the occurrences of each line on a machine with only 1GB of memory? Assume each line is one value, such as a QQ number. If the file were only a few megabytes, or even a few tens of megabytes, the simplest approach would be to read it in directly and compute the statistics. But this is a 4GB file, and in practice it could be tens or even hundreds of gigabytes, so it cannot simply be read into memory, and handling it with PHP alone will definitely not work.

My idea: no matter how large the file is, first cut it into small files the program can comfortably handle, then analyze the small files one by one (or in batches), and finally merge the per-file results into the overall answer. This is similar to the popular MapReduce model, whose core ideas are "Map" and "Reduce" plus distributed file processing; here I only understand and use the Reduce-style post-processing part.

Suppose there is a one-billion-line file, each line holding a 6-to-10-digit QQ number, and the task is to find the 10 most frequently repeated numbers among that billion. The following PHP script generates such a file. Purely random numbers like these may well contain no duplicates at all, but for the sake of the exercise assume repeats exist. For example:
```php
<?php
// Requires 64-bit PHP: the upper bound exceeds the 32-bit integer range.
$fp = fopen('qq.txt', 'w+');
for ($i = 0; $i < 1000000000; $i++) {
    $str = mt_rand(10000, 9999999999) . "\n";
    fwrite($fp, $str);
}
fclose($fp);
```
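The test file does not have to come from PHP. As a hypothetical alternative, awk can generate random numbers in the same range; this sketch emits only 10 lines for demonstration (raise the loop bound to produce a real test file):

```shell
# Illustrative alternative generator: 10 random numbers between
# 10000 and 9999999999, one per line (bump the bound for a real run).
awk 'BEGIN { srand(); for (i = 0; i < 10; i++) print int(10000 + rand() * 9999989999) }' > qq_demo.txt
wc -l qq_demo.txt
```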
Generating the file takes quite a while; running the PHP script directly from the Linux command line saves time, and of course the file can be generated in other ways as well. The generated file is approximately 11GB. Then use the Linux `split` command to cut it, at 1,000,000 lines per output file:

```shell
split -l 1000000 -a 3 qq.txt qqfile
```

qq.txt is split into 1000 files named qqfileaaa through qqfilebml, each about 11MB in size, after which almost any processing method becomes relatively simple. Analyze and count with PHP:
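The effect of `split` is easy to check first on a tiny stand-in file (the filenames here are illustrative, not from the original setup):

```shell
# Dry run of the split step on a 10-line stand-in for qq.txt.
seq 1 10 > sample.txt
split -l 3 -a 3 sample.txt samplefile   # 3 lines per chunk -> 4 chunk files
ls samplefile* | wc -l                  # 4
wc -l < samplefileaaa                   # 3
```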
```php
<?php
$results = array();
foreach (glob('/tmp/qq/*') as $file) {
    $fp = fopen($file, 'r');
    $arr = array();
    while ($qq = fgets($fp)) {
        $qq = trim($qq);
        isset($arr[$qq]) ? $arr[$qq]++ : $arr[$qq] = 1;
    }
    fclose($fp);
    arsort($arr);
    // Keep only this chunk's top 10 -- this shortcut is flawed, see below.
    $i = 0;
    foreach ($arr as $qq => $times) {
        if ($i < 10) {
            isset($results[$qq]) ? $results[$qq] += $times : $results[$qq] = $times;
            $i++;
        } else {
            break;
        }
    }
}
if ($results) {
    arsort($results);
    $i = 0;
    foreach ($results as $qq => $times) {
        if ($i < 10) {
            echo $qq . "\t" . $times . "\n";
            $i++;
        } else {
            break;
        }
    }
}
```
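One way to avoid losing candidates is to skip the per-chunk top-10 shortcut entirely: emit every chunk's full counts, sum them per number, and only then rank. A minimal shell sketch, assuming the qqfile* chunks produced by the split step sit in the current directory:

```shell
# Sum complete per-chunk counts before ranking, so no candidate is lost.
# Assumes the qqfile* chunks from the split step are in the current directory.
for f in qqfile*; do
    sort "$f" | uniq -c          # "count qq" pairs for this chunk
done \
| awk '{sum[$2] += $1} END {for (q in sum) print sum[q], q}' \
| sort -rn | head -10
```

Each chunk's counts are exact, and `awk` only has to hold one counter per distinct number, so memory stays bounded by the number of distinct values rather than the file size.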
So we take the top 10 from each sample, then merge and rank those to get the final statistics. This is not strictly correct: a number that ranks 11th in every sample could still have a total count that puts it in the overall top 10, so the counting algorithm needs improvement. Some will say the awk and sort commands under Linux can do the sorting; I tried it, and it works on small files, but on the 11GB file both the memory use and the running time were unbearable. The awk+sort script:

```shell
awk -F '\@' '{name[$1]++} END {for (count in name) print name[count], count}' qq.txt | sort -n > 123.txt
```

There is still a lot of room for improvement here, both in large-file processing and in the big-data statistics themselves.
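One counterpoint worth noting: GNU `sort` itself performs an external merge sort, spilling runs to temporary files on disk, so a sort-based pipeline can in principle handle files larger than RAM even if it is slow. A sketch, with illustrative flag values (`-S` caps the in-memory buffer, `-T` picks the temp-file directory):

```shell
# Top 10 most frequent lines without holding the whole file in memory;
# GNU sort spills to disk when the -S buffer is exceeded.
sort -S 512M -T /tmp qq.txt | uniq -c | sort -rn | head -10
```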