Top K of massive data statistics

Source: Internet
Author: User

There is a 1 GB file with one word per line. Each word is at most 16 bytes, and available memory is limited to 1 MB. Return the 100 words with the highest frequency.

Ideas:

It is impossible to read the entire 1 GB of data into memory at once. Instead, you can read one line at a time and store each word in a hash table, where the value is the number of times the word appears.

The question is how big this hash table would be and whether it can fit in 1 MB of memory.

Assume every word in the 1 GB file is different. Since a word takes at least 1 byte, there are at most 1 GB / 1 byte = 1 G words. A hash table node contains the word (key), the frequency (value), and a next pointer, so it needs at least about 24 bytes, and the table would need at least 24 bytes * 1 G, which is obviously far too large. If, however, the question told us there are at most 1 million distinct words, then 24 bytes * 1 M = 24 MB; most machines could build such a hash table. But the memory limit for this question is only 1 MB, so even a 24 MB hash table does not fit.
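As a quick sanity check of that estimate, the snippet below redoes the arithmetic; the 24-byte node size is an assumption (a 16-byte word key plus a small count and a next pointer), not something given in the problem.

```python
# Rough memory estimate for a hash table of 1 million distinct words,
# assuming ~24 bytes per node (16-byte word key + count + next pointer).
distinct_words = 1_000_000
bytes_per_node = 24
print(distinct_words * bytes_per_node / 2**20)  # about 23 MB, far above the 1 MB limit
```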

Therefore, the first step is to split the words into different files so that identical words always land in the same file. Each file is then much smaller, and the hash table built for it is smaller as well.

Compute the hash value of each word, take it modulo 5000, and assign the word to one of 5000 files based on the result. On average each file then holds about 1 GB / 5000 ≈ 200 KB of words, so the hash table of its distinct words can generally fit in memory. A sketch of this splitting step follows.
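The sketch below assumes the input is a file named words.txt with one word per line; the bucket file names and the use of MD5 as a stable hash are illustrative choices, not part of the original problem.

```python
import hashlib

NUM_BUCKETS = 5000

def bucket_of(word: str, num_buckets: int) -> int:
    # Stable hash so the same word always lands in the same bucket.
    digest = hashlib.md5(word.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

# Open one bucket file per hash value. (On a real system you may need to
# raise the open-file limit or open/close the files in batches.)
buckets = [open(f"bucket_{i}.txt", "w") for i in range(NUM_BUCKETS)]

with open("words.txt") as f:
    for line in f:
        word = line.strip()
        if word:
            buckets[bucket_of(word, NUM_BUCKETS)].write(word + "\n")

for b in buckets:
    b.close()
```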

Run hash-map statistics on each file, write each word and its frequency to a new file, and obtain 5000 new count files, as in the sketch below.
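This per-file counting pass assumes the bucket_i.txt files from the previous sketch; the counts_i.txt output names and the tab-separated record format are assumptions made for illustration.

```python
from collections import Counter

NUM_BUCKETS = 5000

for i in range(NUM_BUCKETS):
    counts = Counter()
    with open(f"bucket_{i}.txt") as f:
        for line in f:
            word = line.strip()
            if word:
                counts[word] += 1
    # Write "word<TAB>frequency" records for the later merge step.
    with open(f"counts_{i}.txt", "w") as out:
        for word, freq in counts.items():
            out.write(f"{word}\t{freq}\n")
```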

Maintain a min-heap of 100 nodes and read the records of the 5000 count files in sequence, first filling the heap with the first 100 records. After that, if a record's frequency is not greater than the heap top, the word's frequency is no higher than all 100 words already in the heap, so it cannot enter the top 100 and is discarded. If the frequency is greater than the heap top, replace the heap top with the word and sift down to restore the min-heap property. After all records have been traversed, the words remaining in the min-heap are the result.
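This merge step might look like the following sketch, assuming the counts_i.txt files and tab-separated records from the previous sketch; Python's heapq keeps the smallest element at index 0, so heap[0] is always the current 100th-place frequency.

```python
import heapq

NUM_BUCKETS = 5000
K = 100
heap = []  # entries are (frequency, word)

for i in range(NUM_BUCKETS):
    with open(f"counts_{i}.txt") as f:
        for line in f:
            word, freq = line.rsplit("\t", 1)
            freq = int(freq)
            if len(heap) < K:
                heapq.heappush(heap, (freq, word))
            elif freq > heap[0][0]:
                # New word beats the current 100th place: replace the heap top
                # and sift down to restore the min-heap property.
                heapq.heapreplace(heap, (freq, word))

top_100 = sorted(heap, reverse=True)  # highest frequency first
```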

Summary:

The size of the hash table depends not on the total number of words, but on the number of distinct words.

To find the largest top K, use a min-heap; to find the smallest top K, use a max-heap.

Time complexity of the algorithm:

Splitting into small files: O(N)

Hash-map statistics: O(N)

Maintaining the min-heap: O(N' log K), where N' is the number of distinct words and K is the size of the top K (here 100)
