The original site uses three diagrams to describe the shuffle process. The map and reduce functions themselves are our own code, so they are not drawn in those diagrams. Today I'll walk through the three diagrams and give a brief review:
The first diagram: a view of the process as a whole
The diagram shows only part of the whole process: the full job has 4 map tasks and 3 reduce tasks, but only one map and one reduce are drawn.
First question: when does partitioning happen?
Partitioning always happens, no matter how many reduce tasks there are. By default there is one reduce task, so by default there is one partition; the two always match in number. Here there are three reduce tasks, so there are three partitions. Now to answer the question itself: in the figure, the in-memory buffer is where processed map output gets cached, and partitioning happens before that write into the buffer. (The partitioner may simply tag each record with a partition number — I'm not completely sure of the mechanism, but the ordering should be unambiguous.)
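To make the partitioning step concrete, here is a minimal sketch of the logic behind Hadoop's default hash partitioner: hash the key, mask off the sign bit, and take the remainder modulo the number of reduce tasks. The class and method names below only mirror the real API; this is a standalone illustration, not the actual Hadoop implementation.

```java
// Standalone sketch of default hash partitioning (not the real Hadoop class).
public class HashPartitionerSketch {
    // One partition per reduce task: mask the sign bit so the result
    // is never negative, then take the remainder.
    public static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 3; // matches the three reducers in the diagram
        for (String key : new String[] {"apple", "banana", "cherry"}) {
            System.out.println(key + " -> partition "
                    + getPartition(key, numReduceTasks));
        }
    }
}
```

Note that with one reduce task every key maps to partition 0, which matches the "default is one reduce, so one partition" observation above.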
Second question: the spill (overflow) mechanism
The in-memory buffer has limited space, so when usage reaches a certain threshold its contents are spilled to disk, written out partition by partition; the figure shows three partitions.
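The spill behavior can be simulated with a toy bounded buffer: records accumulate in memory, and once a usage threshold is crossed the buffer is flushed to a new "spill file". The capacity and the 0.8 threshold below are illustrative values of my own; in real Hadoop the buffer size and spill trigger are configured via `mapreduce.task.io.sort.mb` and `mapreduce.map.sort.spill.percent`.

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of the map-side spill mechanism (not real Hadoop code).
public class SpillBuffer {
    private final int capacity;
    private final double threshold;
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> spills = new ArrayList<>();

    public SpillBuffer(int capacity, double threshold) {
        this.capacity = capacity;
        this.threshold = threshold;
    }

    public void write(String record) {
        buffer.add(record);
        if (buffer.size() >= capacity * threshold) {
            // Spill: move buffered records to "disk" and clear the buffer.
            spills.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public int spillCount() { return spills.size(); }

    public static void main(String[] args) {
        SpillBuffer b = new SpillBuffer(10, 0.8); // spill every 8 records
        for (int i = 0; i < 20; i++) b.write("record-" + i);
        System.out.println("spill files: " + b.spillCount()); // prints 2
    }
}
```

Writing 20 records with a spill every 8 produces two spill files, with the last 4 records still sitting in the buffer until the map task finishes and flushes them.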
Third question: merge and sort
A sort happens during the merge step, which of course has an efficiency cost, since much of the time sorting is not actually needed. In the final stage of the map task, all the small spill files are merged into one larger file, and that file is also partitioned.
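The merge of several already-sorted spill files is essentially a k-way merge, which can be sketched with a priority queue. Real Hadoop merges serialized key-value bytes per partition; here, as a simplifying assumption, each "spill file" is just a sorted list of string keys.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of the map-side merge: k sorted spill files -> one sorted run.
public class SpillMerge {
    public static List<String> merge(List<List<String>> sortedSpills) {
        // Heap entries: {spill index, position within that spill},
        // ordered by the key they currently point at.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            Comparator.comparing((int[] e) -> sortedSpills.get(e[0]).get(e[1])));
        for (int i = 0; i < sortedSpills.size(); i++) {
            if (!sortedSpills.get(i).isEmpty()) heap.add(new int[] {i, 0});
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] e = heap.poll();
            List<String> spill = sortedSpills.get(e[0]);
            out.add(spill.get(e[1]));
            // Advance within the same spill file, if anything is left.
            if (e[1] + 1 < spill.size()) heap.add(new int[] {e[0], e[1] + 1});
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> spills = List.of(
            List.of("ant", "cat"), List.of("bee", "dog"), List.of("cow"));
        System.out.println(merge(spills)); // [ant, bee, cat, cow, dog]
    }
}
```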
Fourth question: the reduce phase also has a merge and a sort
Each reduce task issues requests to the nodes that ran the map tasks, copies over its corresponding partition data, and then merges everything it has fetched.
Fifth question: the reduce phase can also spill to disk when memory fills up
Sixth question: grouping by key happens on the reduce side, over the fetched map output
Seventh question: the combiner runs on the map output, to cut down the number of key-value pairs the map emits
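The effect of a combiner is easiest to see with word count: raw map output is one ("word", 1) pair per occurrence, and running the reduce logic locally on the map side collapses duplicate keys before anything crosses the network. The sketch below is my own standalone illustration, not Hadoop's Combiner API.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of what a word-count combiner does to map output.
public class CombinerSketch {
    // Input: one key per map-side occurrence; output: one (key, count)
    // pair per distinct key, i.e. far fewer pairs to ship to reducers.
    public static Map<String, Integer> combine(List<String> mapOutputKeys) {
        Map<String, Integer> combined = new TreeMap<>();
        for (String key : mapOutputKeys) {
            combined.merge(key, 1, Integer::sum); // sum counts per key
        }
        return combined;
    }

    public static void main(String[] args) {
        List<String> raw = List.of("the", "cat", "the", "the", "dog");
        Map<String, Integer> out = combine(raw);
        System.out.println(raw.size() + " pairs in, " + out.size() + " pairs out");
        System.out.println(out); // {cat=1, dog=1, the=3}
    }
}
```

Five raw pairs shrink to three, and the reducers still compute the same totals, which is why a combiner is safe here: summation is associative and commutative.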
Eighth question: data travels from map to reduce over HTTP, i.e., across the network.