Given two files a and b, each storing 5 billion URLs where each URL occupies 64 bytes, and a memory limit of 4 GB, how do you find the URLs common to a and b?
You can estimate the size of each file as 5 billion × 64 bytes ≈ 320 GB, far greater than 4 GB, so it is not possible to load either file fully into memory. Consider a divide-and-conquer approach instead.
Traverse file a; for each URL compute hash(URL) % 1000 and, based on the resulting value, append the URL to one of 1000 small files (call them a0, a1, ..., a999). Each small file is then about 320 MB. Traverse file b and distribute its URLs into 1000 small files (b0, b1, ..., b999) using the same hash function. After this step, any URL that appears in both files must land in a corresponding pair of small files (a0 with b0, a1 with b1, ..., a999 with b999); non-corresponding pairs (such as a0 with b99) cannot share a URL. The problem therefore reduces to finding the common URLs within each of the 1000 pairs of small files, as sketched below.
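Here is a minimal sketch of the partitioning pass, assuming URLs are stored one per line; the file names a.txt/b.txt and the output prefixes are placeholders. Note that the hash must be deterministic so the same URL lands in the same bucket in both passes (Python's built-in hash() is randomized per process), and that holding 1000 file handles open at once may require raising the OS file-descriptor limit:

```python
import hashlib

def bucket_of(url: str, buckets: int = 1000, salt: str = "") -> int:
    # Deterministic hash: the same URL must map to the same bucket
    # index when a and b are partitioned, even in separate runs.
    digest = hashlib.md5((salt + url).encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

def partition(path: str, prefix: str, buckets: int = 1000, salt: str = "") -> None:
    """Stream `path` line by line and write each URL to the small
    file chosen by its hash, so identical URLs share a bucket index."""
    outs = [open(f"{prefix}{i}", "w", encoding="utf-8") for i in range(buckets)]
    try:
        with open(path, encoding="utf-8") as f:
            for line in f:
                url = line.rstrip("\n")
                outs[bucket_of(url, buckets, salt)].write(url + "\n")
    finally:
        for out in outs:
            out.close()

# Partition both inputs with the same hash so the pairs line up:
# partition("a.txt", "a")   # writes a0 ... a999
# partition("b.txt", "b")   # writes b0 ... b999
```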
For example, take a0 and b0: read a0 into an in-memory hash table (e.g., a hash_map). Then traverse b0; if a URL from b0 is found in the hash table, that URL exists in both a and b, so save it to an output file. A sketch follows.
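Here is a sketch of the per-pair intersection, using a Python set in place of the hash_map mentioned above; common.txt is a hypothetical output file:

```python
def common_urls(a_part: str, b_part: str) -> set[str]:
    """Load one small file of a into a hash set, then stream the
    matching small file of b and collect the URLs seen in both."""
    # One small file holds ~320 MB of raw URLs; the set's in-memory
    # overhead is larger, but still well within the 4 GB limit.
    with open(a_part, encoding="utf-8") as f:
        seen = {line.rstrip("\n") for line in f}
    common = set()
    with open(b_part, encoding="utf-8") as f:
        for line in f:
            url = line.rstrip("\n")
            if url in seen:
                common.add(url)
    return common

# with open("common.txt", "w", encoding="utf-8") as out:
#     for i in range(1000):
#         for url in common_urls(f"a{i}", f"b{i}"):
#             out.write(url + "\n")
```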
If the hash does not split the data evenly and some small files come out too large (say, larger than 2 GB), you can split those oversized files again with the same method until every piece fits in memory. The re-split must use a different hash function (or a salt), since rehashing with the original function would put every URL back into a single bucket.
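With the salted hash sketched above (the salt parameter is my addition, not part of the original answer), re-splitting an oversized bucket is just another partitioning pass with a new salt, for example:

```python
# Suppose bucket 7 came out larger than memory allows: re-split it
# into 100 sub-buckets with a different salt so its URLs redistribute.
# partition("a7", "a7_", buckets=100, salt="round2")
# partition("b7", "b7_", buckets=100, salt="round2")
```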