MapReduce is a programming model for parallel processing of large-scale data sets. Its core ideas are "Map" and "Reduce": the user specifies a Map function that transforms a set of key-value pairs into a set of intermediate key-value pairs, and a concurrent Reduce (fold) function that merges all of the intermediate values sharing the same key.
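As a concrete illustration, here is a minimal sketch of the two user-supplied functions in Python, using the canonical word-count example; the names map_fn and reduce_fn are illustrative, not part of any real MapReduce API.

```python
def map_fn(doc_name, doc_text):
    # Map: emit an intermediate (word, 1) pair for every word in the document.
    for word in doc_text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: merge all intermediate values that share the same key
    # by summing them, yielding the total count for this word.
    yield (word, sum(counts))
```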
Working principle:
[Figure: MapReduce execution overview, the flowchart given in the paper]

The figure is the flowchart given in the paper. Everything starts with the user program, which links against the MapReduce library and implements the basic map and reduce functions. The numbers in the figure mark the order of execution.
1. The MapReduce library first splits the user program's input file into M pieces (M is user-defined), each typically 16 MB to 64 MB (split 0 through split 4 on the left of the figure), and then uses fork to copy the user program onto the other machines in the cluster.
2. One copy of the user program becomes the master; the rest are workers. The master is responsible for scheduling: it assigns jobs (map jobs or reduce jobs) to idle workers. The number of workers can also be specified by the user.
3. A worker assigned a map job reads the corresponding input split; there are M map jobs, one per split. The map job parses key-value pairs out of the input data and passes each pair as an argument to the map function. The intermediate key-value pairs produced by the map function are buffered in memory.
4. The buffered intermediate key-value pairs are periodically written to local disk, partitioned into R regions (R is user-defined); each region will later correspond to one reduce job. The locations of these intermediate pairs are reported to the master, which forwards the information to the reduce workers.
5. The master tells each worker assigned a reduce job where its partition lives (usually in more than one place, since the intermediate pairs produced by each map job may fall into all R partitions). Once a reduce worker has read all of the intermediate key-value pairs it is responsible for, it sorts them so that pairs with the same key are grouped together. Sorting is necessary because different keys can map to the same partition, that is, the same reduce job (after all, there are fewer partitions than keys).
6. The reduce worker walks through the sorted intermediate key-value pairs and, for each distinct key, passes the key and its associated values to the reduce function; the output of the reduce function is appended to that partition's output file.
7. When all map and reduce jobs are finished, the master wakes up the user program, and the MapReduce call returns to the user's code.

After everything has run, the MapReduce output sits in R output files (one per reduce job). Users usually do not need to merge these R files; instead they pass them as input to another MapReduce program. Throughout the process, the input data comes from the underlying distributed file system (GFS), the intermediate data is kept on local file systems, and the final output is written back to the distributed file system (GFS).

Note the difference between a map/reduce job and the map/reduce function: a map job processes one input split and may call the map function many times, once per input key-value pair; a reduce job processes one partition's intermediate key-value pairs, calling the reduce function once per distinct key, and each reduce job ultimately corresponds to one output file.
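To tie the seven steps together, here is a minimal single-process sketch that simulates the whole flow under simplifying assumptions (no forked workers, no GFS, in-memory records instead of 16-64 MB file splits); run_mapreduce, m, and r are hypothetical names, and the partitioning rule hash(key) % r follows the default partitioning function described in the paper.

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn, m, r):
    # Step 1: split the input into m shards (here by round-robin over
    # in-memory records rather than 16-64 MB file splits).
    shards = [records[i::m] for i in range(m)]

    # Steps 3-4: each "map job" processes one shard, calling map_fn once
    # per input key-value pair, and partitions the intermediate pairs into
    # r regions with hash(key) % r (the paper's default partitioning).
    regions = [defaultdict(list) for _ in range(r)]
    for shard in shards:
        for key, value in shard:
            for ikey, ivalue in map_fn(key, value):
                regions[hash(ikey) % r][ikey].append(ivalue)

    # Steps 5-6: each "reduce job" sorts its partition so equal keys are
    # grouped together, then calls reduce_fn once per distinct key; each
    # partition yields one output "file" (here, a list).
    outputs = []
    for region in regions:
        out = []
        for ikey in sorted(region):
            out.extend(reduce_fn(ikey, region[ikey]))
        outputs.append(out)
    return outputs  # r outputs, one per reduce job

# Usage with the word-count functions above:
docs = [("doc1", "the quick brown fox"),
        ("doc2", "the lazy dog jumps over the fox")]
for i, out in enumerate(run_mapreduce(docs, map_fn, reduce_fn, m=2, r=2)):
    print(f"partition {i}: {out}")
```

A real implementation runs the map and reduce loops on separate worker machines and moves the partitions over the network; this sketch only preserves the data flow.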