Example
A file is 1 GB in size and contains one word per line; no word exceeds 16 bytes, and the memory limit is 1 MB. Return the 100 most frequent words.
Ideas
- First, split the file into pieces small enough to fit in memory.
- For each piece, count the frequency of every word with a hash map.
- Use a min-heap to extract the top 100 words from each piece.
- Merge the per-piece results.
Concrete solution
1. Divide and conquer:
Read the file sequentially; for each word w, compute hash(w) % 2000 and append the word to the small file selected by that value. Each small file will then be roughly 500 KB.
Note:
If any of the small files still exceeds 1 MB, keep splitting it in the same way until every resulting file is smaller than 1 MB.
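The partitioning step can be sketched as follows (a minimal sketch: the file names, the `num_buckets` parameter, and the use of Python's built-in `hash` are illustrative assumptions; any hash that is stable within the run works):

```python
import os

def partition(input_path, out_dir, num_buckets=2000):
    """Stream the large file once and scatter each word into the bucket
    file chosen by hash(word) % num_buckets, so every occurrence of a
    given word lands in the same small file."""
    os.makedirs(out_dir, exist_ok=True)
    # Note: this opens num_buckets file handles at once; a real run may
    # need to raise the OS open-file limit or write buckets in batches.
    outs = [open(os.path.join(out_dir, f"bucket_{i:04d}.txt"), "w")
            for i in range(num_buckets)]
    try:
        with open(input_path) as f:
            for line in f:
                word = line.strip()
                if word:
                    outs[hash(word) % num_buckets].write(word + "\n")
    finally:
        for out in outs:
            out.close()
```

Because the bucket is a pure function of the word, all copies of the same word end up in the same small file, which is what lets each file be counted independently later.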
2. Hash counting:
For each small file, build a hash map from every word that appears to its frequency.
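The per-file counting is a plain in-memory hash map; a sketch using `collections.Counter` (the function name and file path are hypothetical):

```python
from collections import Counter

def count_words(bucket_path):
    """Each small file fits within the memory limit, so an in-memory
    hash map (word -> frequency) is sufficient."""
    with open(bucket_path) as f:
        return Counter(line.strip() for line in f if line.strip())
```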
3. Heap selection:
Use a min-heap of size 100 to extract the 100 most frequent words from each small file, and write those 100 words and their frequencies to an output file. This yields 2,000 result files.
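A sketch of the min-heap selection (the function name is an assumption; the root of the heap is always the weakest of the current candidates, so any more frequent word evicts it):

```python
import heapq

def top_k(counts, k=100):
    """Keep a min-heap of at most k (count, word) pairs. The heap root
    is the smallest count among the current top k, so a new word only
    enters if it beats that root."""
    heap = []
    for word, cnt in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (cnt, word))
        elif cnt > heap[0][0]:
            heapq.heapreplace(heap, (cnt, word))
    return sorted(heap, reverse=True)  # highest frequency first
```

This runs in O(n log k) per file while holding only k entries in memory, which is why a min-heap is preferred over fully sorting each file.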
4. Merge:
Finally, merge the 2,000 result files (a process similar to merge sort) to obtain the global top 100.
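The merge step can be sketched as a k-way merge over the per-file top lists (the function name is hypothetical; each input list is assumed sorted by descending count, as produced by the heap step):

```python
import heapq
from itertools import islice

def merge_top_k(per_file_tops, k=100):
    """K-way merge (as in merge sort) of descending (count, word) lists.
    The hash partition already sent every occurrence of a word to a
    single file, so counts never have to be summed across files; the
    first k items of the merged stream are the global top k."""
    return list(islice(heapq.merge(*per_file_tops, reverse=True), k))
```

A key observation justifying this step: because the split was by hash of the word, the per-file counts are already final, so merging is pure selection rather than re-aggregation.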
Massive data processing algorithm (top K problem)