Start learning Hadoop's popular distributed data-processing technology today, beginning with Hadoop: The Definitive Guide, 4th Edition, often called the Hadoop bible. In its first chapter the author describes two ways a distributed system can split data up for processing: 1) split by some natural unit (such as year, or a value range); 2) divide the data evenly into N parts, one per machine.
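To make the two approaches concrete, here is a minimal single-machine sketch; the Row type, field names, and method names are invented for illustration and are not code from the book:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical record type: (year, value) pairs standing in for real rows.
record Row(int year, long value) {}

public class SplitStrategies {

    // Strategy 1: partition by a natural unit (here, the year field).
    // Block sizes depend entirely on how the data is distributed over
    // years, so they can be wildly uneven.
    static Map<Integer, List<Row>> splitByYear(List<Row> rows) {
        Map<Integer, List<Row>> parts = new HashMap<>();
        for (Row r : rows) {
            parts.computeIfAbsent(r.year(), k -> new ArrayList<>()).add(r);
        }
        return parts;
    }

    // Strategy 2: divide the data evenly into n chunks, one per machine.
    // Chunk sizes differ by at most one record, so no machine lags behind.
    static List<List<Row>> splitEvenly(List<Row> rows, int n) {
        List<List<Row>> chunks = new ArrayList<>();
        int base = rows.size() / n, rem = rows.size() % n, start = 0;
        for (int i = 0; i < n; i++) {
            int end = start + base + (i < rem ? 1 : 0); // spread the remainder
            chunks.add(rows.subList(start, end));
            start = end;
        }
        return chunks;
    }
}
```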
The likely problem with the first approach is that the blocks produced by the split differ in size. Splitting by year, for instance, the smallest and largest partitions may easily differ by orders of magnitude, so the machine holding the biggest block finishes last while everything else sits waiting for it. The problem with the second approach is that it is more complex to implement, because producing even partitions first requires calculating the boundary of each block. And both approaches leave a further question: once the child nodes finish, their partial results (however small) must still be combined somewhere, so which machine should perform the final summary?
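The boundary bookkeeping that makes the second approach harder can be sketched as follows. This is a simplified, hypothetical version; real Hadoop input splitting (e.g. FileInputFormat) additionally aligns splits with HDFS blocks and lets record readers handle records that straddle a boundary:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitBounds {

    // A split is just a (start offset, length) pair over the input file.
    record Split(long start, long length) {}

    // Compute roughly equal-sized byte ranges for a file of fileLength bytes.
    // This is the extra bookkeeping the second approach needs up front;
    // splitting by year requires no such boundary calculation.
    static List<Split> computeSplits(long fileLength, long splitSize) {
        List<Split> splits = new ArrayList<>();
        for (long start = 0; start < fileLength; start += splitSize) {
            splits.add(new Split(start, Math.min(splitSize, fileLength - start)));
        }
        return splits;
    }
}
```

As for where the partial results are combined: MapReduce's answer is the shuffle, in which each mapper's output is partitioned and sent to reducer machines that perform the final aggregation, so the framework rather than the programmer decides where that summary runs.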
The second method has a clear advantage over the first. Suppose the first method splits by year and I only want to analyze a single year's data: then only one machine does any work while the rest of the cluster sits idle. The second method is more flexible in that respect. Still, I don't think it is a cure-all: it remains hard to implement, and the right partitioning rule differs from one operation to the next.
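The single-year scenario can be illustrated with the hypothetical helpers from the first sketch (this method would be added to the SplitStrategies class above):

```java
// Count one year's records under each strategy.
static long countYear(List<Row> rows, int year, int machines) {
    // Strategy 1: all of that year's data sits in one partition, so a
    // single machine answers the query while the rest of the cluster idles.
    long viaYearSplit = splitByYear(rows).getOrDefault(year, List.of()).size();

    // Strategy 2: every chunk holds some of the year's records, so all
    // machines share the scan evenly, at the cost of touching every chunk.
    long viaEvenSplit = splitEvenly(rows, machines).stream()
            .flatMap(List::stream)
            .filter(r -> r.year() == year)
            .count();

    assert viaYearSplit == viaEvenSplit; // same answer either way
    return viaEvenSplit;
}
```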