Title Description: Given two files a and b, each storing 5 billion URLs where each URL occupies 64 bytes, and a memory limit of 4 GB, how do you find the URLs common to a and b?
Analysis: Let's start by looking at how much space is needed to load all these URLs into memory.
1 MB = 2^20 bytes ≈ 10^6 bytes (one million)
1 GB = 2^30 bytes ≈ 10^9 bytes (one billion)
5 billion URLs × 64 bytes per URL = 320 × 10^9 bytes = 320 GB per file
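A quick arithmetic check of these figures:

```python
urls = 5 * 10**9      # 5 billion URLs per file
total = urls * 64     # 64 bytes per URL
print(total)          # 320_000_000_000 bytes = 320 GB per file
```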
Loading all of them into memory is obviously impossible. We can solve the problem with either of the following methods:
Method 1:
Use a Bloom filter. Assuming the filter's false-positive rate is 0.01, the bit array size m must be roughly 13 times the number of input elements n, and the required number of hash functions k is about 8.
Number of elements: n = 5 billion. Required bit array size: m = 13 × 5 billion = 65 billion bits to reach an error rate of 0.01. But our memory can hold only 4 GB × 8 = 32 billion bits, so with this implementation the error rate will be greater than 0.01. The procedure itself is simple: insert every URL of file a into the filter, then test every URL of file b; each hit is reported as a common URL, accepting some false positives.
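A minimal sketch of this approach, assuming URLs are stored one per line in plain-text files named a.txt and b.txt (hypothetical names). The bit-array size and hash count follow the figures above, so running it verbatim would allocate the full 4 GB bit array:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash positions per item."""
    def __init__(self, m_bits, k):
        self.m, self.k = m_bits, k
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, url):
        # Derive k positions from one MD5 digest via double hashing.
        d = hashlib.md5(url.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:], "big")
        return ((h1 + i * h2) % self.m for i in range(self.k))

    def add(self, url):
        for p in self._positions(url):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, url):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(url))

# 4 GB of memory gives 32 billion bits -- fewer than the 65 billion
# needed for a 0.01 error rate, so expect a higher false-positive rate.
bf = BloomFilter(m_bits=32 * 10**9, k=8)
with open("a.txt") as fa:
    for line in fa:
        bf.add(line.strip())
with open("b.txt") as fb, open("common.txt", "w") as out:
    for line in fb:
        if bf.might_contain(line.strip()):  # may include false positives
            out.write(line)
```

Note the trade-off: every URL of b that really is in a will be reported (a Bloom filter has no false negatives), but some URLs unique to b will be reported as well, which is why Method 2 below is the exact alternative.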
Method 2:
Scan files a and b, and use hash(URL) % k (k a positive integer; for example, with k = 1000 each small file occupies only about 320 MB and fits easily in memory) to split each file's URLs into k smaller files: a0, a1, ..., a999 and b0, b1, ..., b999.
After this step, any URL common to both files must appear in a corresponding pair of small files (a0 vs b0, a1 vs b1, ..., a999 vs b999), because identical URLs produce identical values of hash(URL) % 1000; non-corresponding pairs cannot share a URL.
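A minimal sketch of this partitioning pass, under the same assumption of one URL per line in a.txt and b.txt (hypothetical names). An MD5-based bucket function stands in for hash(URL), since Python's built-in hash() is salted per process and would not be stable across runs:

```python
import hashlib

K = 1000

def bucket(url, k=K):
    # Deterministic hash so a given URL always lands in the same bucket.
    return int.from_bytes(hashlib.md5(url.encode()).digest()[:8], "big") % k

def partition(path, prefix, k=K):
    # One output file per bucket; note this holds k file descriptors open
    # at once, which may require raising the OS limit.
    outs = [open(f"{prefix}{i}", "w") for i in range(k)]
    with open(path) as f:
        for line in f:
            outs[bucket(line.strip())].write(line)
    for o in outs:
        o.close()

partition("a.txt", "a")  # writes a0, a1, ..., a999
partition("b.txt", "b")  # writes b0, b1, ..., b999
```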
Then we only need to find the common URLs within each of the 1000 pairs of small files. For example, for a0 vs b0: traverse a0 and insert each of its URLs into a hash map; then traverse b0, and if a URL of b0 is present in the hash map, that URL exists in both a and b, so save it.
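Continuing the sketch, the pairwise comparison can reuse the a0..a999 and b0..b999 files produced above; a Python set plays the role of the hash map:

```python
def common_urls(path_a, path_b, out):
    # Each a_i holds roughly 320 MB of URLs, so its in-memory set
    # fits well under the 4 GB limit.
    with open(path_a) as fa:
        seen = {line.strip() for line in fa}
    with open(path_b) as fb:
        for line in fb:
            if line.strip() in seen:
                out.write(line)

with open("common.txt", "w") as out:
    for i in range(1000):
        common_urls(f"a{i}", f"b{i}", out)
```

Unlike Method 1, this produces the exact set of common URLs, at the cost of one extra full pass over both files to write the partitions.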