Description:
The figures below come from the courseware of Professor Huang Yihua's MapReduce course in the Computer Science Department of Nanjing University; they are lightly collated and summarized here.
This article is aimed at people who have come across MapReduce but are still unclear about its workflow — the blogger included — so the goal is to learn it together.
The principle of MapReduce
MapReduce borrows ideas from Lisp, a functional programming language. Lisp (LISt Processing) is a list-processing language whose operations can act on an entire list of elements at once.
For example: (add #(1 2 3 4) #(4 3 2 1)) produces the result #(5 5 5 5).
MapReduce is similar to Lisp in this respect: in its final reduce phase, MapReduce likewise operates on whole groups of values gathered under each key.
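The Lisp-style idea above — applying one operation across whole lists at once — can be sketched in Python. The function name `lisp_add` is illustrative, not part of any library:

```python
def lisp_add(xs, ys):
    """Element-wise addition over two lists, mirroring
    (add #(1 2 3 4) #(4 3 2 1)) -> #(5 5 5 5)."""
    return [x + y for x, y in zip(xs, ys)]

print(lisp_add([1, 2, 3, 4], [4, 3, 2, 1]))  # [5, 5, 5, 5]
```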
The following diagram shows how MapReduce works.
1) First, the records of the input (such as lines of a text file, or rows of a database table) are passed to the map function as key-value pairs. The map function processes each pair (for example, counting word occurrences) and outputs intermediate key-value pairs.
2) Before any key-value pair can enter reduce processing, all map functions must have finished. To achieve this synchronization while keeping execution efficient, MapReduce introduces a barrier (synchronization barrier) between the two phases. Besides synchronizing, this stage consolidates the intermediate map results: A. values with the same key are merged within each map node; B. key-value pairs with the same key from different map nodes are then sent to the same reduce node for processing.
3) In the reduce phase, each reduce node receives all the key-value pairs for a given key from every map node, and merges their values into the final result for that key.
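The three steps above can be modeled in a few lines of single-process Python. This is a toy sketch of the phases, not the Hadoop API; the names `run_mapreduce`, `map_fn`, and `reduce_fn` are illustrative:

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # 1) Map phase: every input record is turned into intermediate key-value pairs.
    intermediate = []
    for key, value in records:
        intermediate.extend(map_fn(key, value))

    # 2) Barrier / shuffle: runs only after ALL map calls have finished;
    #    values with the same key are grouped together.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)

    # 3) Reduce phase: each key's list of values is merged into one result.
    return {key: reduce_fn(key, values) for key, values in groups.items()}
```

For instance, passing a map function that emits `(word, 1)` for each word and a reduce function that sums the values turns this skeleton into a word counter.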
Take word-frequency counting as an example.
Word-frequency counting tallies how many times each word occurs across all the input texts. The corresponding example program in Hadoop is WordCount, commonly known as the "Hello World" of Hadoop programming.
Because we have more than one text, we can count the words in each text in parallel and then combine the per-text counts into the final totals.
This makes it a good illustration of the map and reduce phases.
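A minimal sketch of this parallel-then-total pattern, using Python's standard `collections.Counter` (the sample texts are made up for illustration):

```python
from collections import Counter

# Hypothetical input texts standing in for separate files.
texts = ["hello world hello", "world of mapreduce"]

# Map step: count words within each text independently (parallelizable per file).
per_text_counts = [Counter(t.split()) for t in texts]

# Reduce step: total the per-text counts into one global tally.
total = sum(per_text_counts, Counter())
print(total["hello"], total["world"])  # 2 2
```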
It can be seen that this figure further refines the one above, mainly in two respects:
1) The combiner node merges pairs with the same key within a single map node's output, as mentioned above, avoiding repeated transfers of the same key and thereby reducing communication overhead.
2) The partitioner node divides the intermediate results produced by the maps, ensuring that pairs with the same key reach the same reduce node.
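The two roles can be sketched as plain functions. This is an assumed simplification, not Hadoop's `Combiner`/`Partitioner` interfaces, and it assumes integer values (as in word counting):

```python
from collections import defaultdict

def combine(pairs):
    """Combiner: merge values for the same key within ONE map node's
    output, so fewer pairs are shipped over the network."""
    local = defaultdict(int)
    for key, value in pairs:
        local[key] += value
    return list(local.items())

def partition(key, num_reducers):
    """Partitioner: deterministically route a key to one reducer, so every
    occurrence of that key lands on the same reduce node."""
    return hash(key) % num_reducers
```

Because `partition` depends only on the key, all map nodes agree on where each key goes, which is exactly the guarantee the shuffle stage needs.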