Combine and partition are both functions; the step in between, from map output to reduce input, is the shuffle.
Combine runs on both the map side and the reduce side. Its job is to merge key/value pairs that share the same key, and it can be customized.
The combine function merges the <key,value> pairs produced by one map function (multiple pairs with the same key) into a new <key2,value2> pair, and that new <key2,value2> becomes the input to the reduce function.
The value2 here can also be called "values", because there are several of them. The purpose of this merging is to reduce network transmission.
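A minimal sketch of that merging step, using a word-count-style job; the function and variable names here are illustrative, not Hadoop APIs:

```python
from collections import defaultdict

def combine(map_output):
    """Merge <key, value> pairs with the same key emitted by one map task
    into one pair per key before they cross the network."""
    merged = defaultdict(int)
    for key, value in map_output:
        merged[key] += value
    return sorted(merged.items())

# One map task emits four pairs, but only two need to cross the network:
pairs = [("cat", 1), ("dog", 1), ("cat", 1), ("cat", 1)]
print(combine(pairs))  # [('cat', 3), ('dog', 1)]
```

Without the combine step, all four pairs would be shipped to the reducers; with it, each map task sends only one pair per key.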
Partition divides the output of each map node, routing keys to different reducers; it can also be customized. You can think of it as classification.
We classify the mixed-up data. For example, on a farm there are cattle, sheep, chickens, ducks, and geese all mixed together during the day, but at night the cattle go back to the cowshed, the sheep back to the fold, and the chickens back to the coop. Partition plays the same role for data: it sorts records into categories. When writing a program, MapReduce uses HashPartitioner by default to do this classification for us, and we can also customize it.
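A sketch of HashPartitioner-style classification, using Python's built-in hash in place of Java's hashCode(); the names are illustrative:

```python
def partition(key, num_reducers):
    """Map a key to one of num_reducers partitions, analogous to
    (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks in Hadoop."""
    return (hash(key) & 0x7FFFFFFF) % num_reducers

# Every record with the same key lands in the same partition,
# so a single reducer sees all the values for that key.
animals = ["cow", "sheep", "chicken", "cow", "sheep"]
buckets = {a: partition(a, 3) for a in set(animals)}
```

The essential property is deterministic routing: the same key always hashes to the same reducer, which is what lets the reduce side gather all values for a key in one place.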
Shuffle is the process between map and reduce, and it includes combine and partition at both ends.
The output of map is distributed to the reducers by the partitioner; after a reducer performs the reduce operation, it writes the result out through OutputFormat.
The main function of the shuffle phase is fetchOutputs(), which copies the map-phase output to the local disk of the reduce node.
The everyday meaning of "shuffle" is to shuffle or mix up (as with cards), and in MapReduce, shuffle describes the journey of data from the map task's output to the reduce task's input. Partition, which means to divide or categorize, is one part of the shuffle.
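The whole path described above can be sketched end to end as a toy, single-process simulation; all names are illustrative, and real Hadoop performs these steps across machines with sorting and disk spills:

```python
from collections import defaultdict

def shuffle(map_outputs, num_reducers):
    """Route each <key, value> pair to its reducer and group values by key,
    mimicking what each reduce node holds after fetching map outputs."""
    per_reducer = [defaultdict(list) for _ in range(num_reducers)]
    for pairs in map_outputs:            # one list of pairs per map task
        for key, value in pairs:
            r = (hash(key) & 0x7FFFFFFF) % num_reducers
            per_reducer[r][key].append(value)
    return per_reducer

# Two map tasks feed two reducers; all values for a given key
# end up together on exactly one reducer.
maps = [[("cat", 1), ("dog", 1)], [("cat", 1)]]
grouped = shuffle(maps, 2)
```

After this grouping, each reducer can run reduce over `(key, values)` independently, which is why partitioning by key is the backbone of the shuffle.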
What are the roles of Combine, partition, and shuffle in Hadoop?