Big Data algorithm problem (i)

Source: Internet
Author: User

Category: Massive data processing interview questions

1. Given a massive amount of log data, extract the IP that visited Baidu the most times on a particular day.

First, take that day's log of visits to Baidu and write the IPs out to one large file. Note that an IP address is 32 bits, so there are at most 2^32 distinct IPs. We can use a mapping method: for example, take each IP modulo 1000 to split the large file into 1000 small files, then find the IP with the highest frequency in each small file (use a hash_map for the frequency statistics, then pick the entry with the largest count) along with that frequency. Finally, among the 1000 candidate IPs, the one with the highest frequency is the answer.

Or, stated more systematically:

Algorithm idea: divide and conquer + hash

1. There are at most 2^32 ≈ 4G distinct IP addresses, so they cannot all be loaded into memory and processed at once;

2. Apply the idea of divide and conquer: distribute the massive IP log into 1024 small files according to hash(IP) % 1024. Each small file then contains at most about 4M distinct IP addresses;

3. For each small file, build a hash_map with the IP as the key and its number of occurrences as the value, keeping track of the IP with the most occurrences so far;

4. This yields the most frequent IP in each of the 1024 small files; from these candidates, obtain the overall most frequent IP with an ordinary sorting or selection pass. A sketch in code follows this list.
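
Below is a minimal sketch of this two-pass scheme in Python. The log layout (one IPv4 address per line) and the file names access.log and bucket_*.txt are assumptions for illustration; Python's built-in hash stands in for whatever hash function is used in practice.

```python
from collections import Counter

NUM_BUCKETS = 1024

def split_into_buckets(log_path):
    """Pass 1: hash-partition the IPs into 1024 small bucket files."""
    buckets = [open(f"bucket_{i}.txt", "w") for i in range(NUM_BUCKETS)]
    try:
        with open(log_path) as log:
            for line in log:
                ip = line.strip()
                # hash() is stable within one process, so every occurrence
                # of the same IP lands in the same bucket file.
                buckets[hash(ip) % NUM_BUCKETS].write(ip + "\n")
    finally:
        for f in buckets:
            f.close()

def most_frequent_ip():
    """Pass 2: exact counting inside each small bucket, then take the
    maximum over the 1024 per-bucket winners."""
    best_ip, best_count = None, 0
    for i in range(NUM_BUCKETS):
        with open(f"bucket_{i}.txt") as f:
            counts = Counter(line.strip() for line in f)
        if counts:
            ip, cnt = counts.most_common(1)[0]
            if cnt > best_count:
                best_ip, best_count = ip, cnt
    return best_ip, best_count

# Hypothetical usage:
# split_into_buckets("access.log")
# print(most_frequent_ip())
```

Because the hash is deterministic, every occurrence of a given IP lands in the same bucket, so the per-bucket counts are exact and the global winner must be one of the 1024 per-bucket winners.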

2. A search engine records in a log file every query string a user submits; each query string is 1–255 bytes long.

Suppose there are currently 10 million records (the repetition rate of these query strings is relatively high; although the total is 10 million, there are no more than 3 million distinct strings after deduplication). The higher a query string's repetition count, the more users queried it and the more popular it is. Count the 10 most popular query strings, using no more than 1 GB of memory.

First, preprocess this batch of massive data: complete the frequency statistics with a hash table in O(N) time.

Second, use a heap data structure to find the top K, with time complexity O(N' log K).

That is, with the help of the heap structure, we can find and adjust/move elements in logarithmic time. So maintain a min-heap of size K (here K = 10), then traverse the 3 million distinct queries, comparing each against the root element of the heap. The final time complexity is therefore O(N) + O(N' · log K), where N is 10 million and N' is 3 million. A sketch follows.
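
A minimal sketch of both steps in Python, assuming the queries arrive as an iterable of strings. The hash-table statistics are done with collections.Counter, and heapq maintains the size-10 min-heap, whose root is always the weakest of the current top K:

```python
import heapq
from collections import Counter

def top_k_queries(queries, k=10):
    counts = Counter(queries)             # step 1: O(N) frequency statistics
    heap = []                             # min-heap of (count, query) pairs
    for query, cnt in counts.items():     # step 2: N' comparisons, O(log K) each
        if len(heap) < k:
            heapq.heappush(heap, (cnt, query))
        elif cnt > heap[0][0]:            # beats the current weakest of the top K
            heapq.heapreplace(heap, (cnt, query))
    return sorted(heap, reverse=True)     # most popular first
```

heapq.nlargest(k, counts.items(), key=lambda kv: kv[1]) would accomplish the same in one library call; the explicit loop just makes the O(N' log K) comparison against the heap root visible.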

Alternatively, use a trie tree, storing each query string's occurrence count in the keyword field of its node (0 if the string has not appeared). Finally, extract the 10 most frequent strings with a min-heap of 10 elements, as sketched below.
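
A minimal sketch of the trie variant, with hypothetical class and function names. Each node carries a count field that stays 0 unless some query string ends at that node; insertion costs O(L) per string (L ≤ 255 bytes), and the top 10 is again extracted with a small min-heap:

```python
import heapq

class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}  # character -> child node
        self.count = 0      # occurrences of the string ending here; 0 if none

def insert(root, query):
    """Walk/extend the trie along the query and bump its terminal count."""
    node = root
    for ch in query:
        node = node.children.setdefault(ch, TrieNode())
    node.count += 1

def top_k(root, k=10):
    """Depth-first walk, keeping the k highest counts in a min-heap."""
    heap, stack = [], [("", root)]
    while stack:
        prefix, node = stack.pop()
        if node.count > 0:
            if len(heap) < k:
                heapq.heappush(heap, (node.count, prefix))
            elif node.count > heap[0][0]:
                heapq.heapreplace(heap, (node.count, prefix))
        for ch, child in node.children.items():
            stack.append((prefix + ch, child))
    return sorted(heap, reverse=True)
```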
