Hadoop 2.4.1 Learning: Mapper and Reducer

MapReduce lets programmers easily write programs that process large amounts of data in parallel on large clusters, while the framework ensures that those programs run reliably and tolerate failures. An application written for MapReduce is called a job. Hadoop supports jobs written in Java as well as in other languages, via Hadoop Streaming (shell, Python) and Hadoop Pipes (C++). Hadoop 2.x no longer contains the JobTracker and TaskTracker components of Hadoop 1.x, but that does not mean it no longer supports MapReduce jobs: instead, Hadoop 2.x remains backward compatible with MapReduce jobs through a single ResourceManager, one NodeManager per slave node, and one MRAppMaster per application.

In the new version a MapReduce job is still composed of map and reduce tasks. The map tasks receive input data that the MapReduce framework has split into chunks, and they process those chunks in a completely parallel manner. The framework then sorts the output of the map tasks and passes it as input to the reduce tasks, which produce the final output. Throughout execution, the MapReduce framework is responsible for scheduling tasks, monitoring them, and re-executing any that fail.

Usually the compute nodes and the storage nodes are the same, and the MapReduce framework schedules tasks on the nodes where the data is stored whenever it can, which helps reduce the bandwidth spent transferring data. A MapReduce application provides the map and reduce functions by implementing or inheriting the appropriate interfaces or classes; these functions carry out the map task and the reduce task respectively. The job client submits the finished job to the ResourceManager (rather than to a JobTracker); the ResourceManager distributes the job to the slave nodes, schedules and monitors it, and provides status and diagnostic information to the job client.

The MapReduce framework handles only <key, value> pairs; that is, a job's input and output are both sets of key-value pairs. Classes used as keys or values must be serializable by the MapReduce framework, so they need to implement the Writable interface; the commonly used IntWritable, LongWritable, and Text are classes that implement it. A class used as a key must additionally implement the WritableComparable interface, mainly so that keys can be sorted; the three classes just mentioned implement this interface as well.
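As a minimal sketch of such a key class (the YearKey type below is a hypothetical example, not part of Hadoop), a custom key only needs to serialize its field and define an ordering:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// Hypothetical custom key: a year stored as an int.
// Keys must be WritableComparable so the framework can serialize and sort them.
public class YearKey implements WritableComparable<YearKey> {
    private int year;

    public YearKey() {}                              // no-arg constructor required for deserialization
    public YearKey(int year) { this.year = year; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(year);                          // serialize
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        year = in.readInt();                         // deserialize
    }

    @Override
    public int compareTo(YearKey other) {
        return Integer.compare(year, other.year);    // ordering used by the sort
    }

    @Override
    public int hashCode() { return year; }           // used by HashPartitioner

    @Override
    public boolean equals(Object o) {
        return o instanceof YearKey && ((YearKey) o).year == year;
    }
}
```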

After this brief introduction to the MapReduce framework, we turn to its two most important abstractions: Mapper and Reducer. As mentioned above, they make up a MapReduce job and carry out the actual business logic.

A Mapper runs as an independent task that converts input records into intermediate records: it processes input key-value pairs and outputs a set of intermediate key-value pairs, which are collected with the Context.write(WritableComparable, Writable) method. The key and value types of the intermediate records need not match those of the input records, and they often differ. An input record may produce zero or more intermediate records; for example, if a record does not meet the business requirements (say it lacks, or contains, a particular value), the map method can simply return, emitting zero records, in which case the mapper acts as a filter.
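A minimal sketch of this filtering pattern (the class name and the "ERROR" token are hypothetical), using the key and value types produced by the default TextInputFormat:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical filtering mapper: a line is emitted only if it contains "ERROR",
// so each input record produces zero or one intermediate record.
public class ErrorFilterMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        if (line.toString().contains("ERROR")) {
            context.write(line, ONE);   // collect the intermediate key-value pair
        }
        // otherwise emit nothing: the mapper acts as a filter
    }
}
```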

The MapReduce framework then groups all intermediate values associated with a given key and passes them to the reducers. The Job.setGroupingComparatorClass(Class) method lets the user supply a Comparator to control this grouping. The mapper output is sorted and then partitioned per reducer; the total number of partitions equals the number of reduce tasks started by the job, and the programmer can implement a custom Partitioner to control which reducer processes which records. By default, HashPartitioner is used. Programmers can also specify a combiner with Job.setCombinerClass(Class) to perform local aggregation of the intermediate output, which helps reduce the amount of data transferred from mapper to reducer. The intermediate output of the mapper is always stored sorted in the format (key-len, key, value-len, value), and the application can control through configuration whether the intermediate output is compressed and which codec is used; the relevant parameters are mapreduce.map.output.compress and mapreduce.map.output.compress.codec. The programmer passes the Mapper to the job with Job.setMapperClass(Class); the MapReduce framework then calls the mapper's map(WritableComparable, Writable, Context) method for each key-value pair of the task, and the application can override the cleanup(Context) method to perform any required cleanup work.
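A sketch of how these settings are wired into a job (the job name, the ErrorFilterMapper from the earlier sketch, and the choice of DefaultCodec are assumptions for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;
import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;

public class MapSideSetup {
    // Sketch: configure the map side with compressed intermediate output,
    // a combiner for local aggregation, and the (default) HashPartitioner.
    public static Job newJob() throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                      DefaultCodec.class, CompressionCodec.class);

        Job job = Job.getInstance(conf, "filter-and-count");
        job.setMapperClass(ErrorFilterMapper.class);     // hypothetical mapper sketched earlier
        job.setCombinerClass(LongSumReducer.class);      // local aggregation before the shuffle
        job.setPartitionerClass(HashPartitioner.class);  // the default, shown explicitly
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        return job;
    }
}
```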

The MapReduce framework launches one map task for each InputSplit generated by the job's InputFormat, so the total number of map tasks is determined by the size of the input data, or more precisely by the total number of blocks in the input files. Although it is possible to run as many as 300 map tasks on a node for very CPU-light maps, each node is better suited to running 10-100 map tasks in parallel. Because starting a task takes some time, it is best if each task runs for at least one minute; if tasks finish very quickly, the job will spend most of its time creating tasks.
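One way to avoid very short-lived map tasks is to enlarge the input splits. A minimal sketch (the helper class and the 256 MB and 512 MB values are arbitrary illustrations, not Hadoop defaults):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitTuning {
    // Sketch: the number of map tasks follows from the number of InputSplits,
    // so raising the minimum split size reduces the number of short-lived tasks.
    public static void useLargeSplits(Job job) {
        FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024); // at least 256 MB per split
        FileInputFormat.setMaxInputSplitSize(job, 512L * 1024 * 1024); // at most 512 MB per split
    }
}
```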

A Reducer reduces the set of intermediate values sharing a key to a smaller set of values, for example by merging word counts. The number of reducers a job starts can be set with Job.setNumReduceTasks(int) or with the mapreduce.job.reduces parameter in mapred-site.xml; the former is recommended because it lets the programmer decide per job how many reducers to start, while the latter is more of a default value. The programmer supplies the Reducer to the job with Job.setReducerClass(Class), and the MapReduce framework calls the reduce(WritableComparable, Iterable<Writable>, Context) method for each <key, (list of values)> pair. As with Mapper, the programmer can override the cleanup(Context) method to specify any cleanup work that is needed.
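A minimal sketch of such a reducer (the class name is hypothetical), pairing with the filtering mapper above to count matching lines:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: sums the counts emitted for each key,
// e.g. merging per-mapper counts into a single total per key.
public class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private final LongWritable total = new LongWritable();

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();             // reduce the list of values to one value
        }
        total.set(sum);
        context.write(key, total);          // one output record per key
    }
}
```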

A reducer runs in three stages: shuffle, sort, and reduce. In the shuffle phase, the MapReduce framework fetches the relevant partition of every mapper's output over HTTP. In the sort phase, the framework groups the reducer's input by key (different mappers may have output the same key). Shuffle and sort happen at the same time: as map outputs are fetched, they are merged. In the reduce phase, reduce(WritableComparable, Iterable<Writable>, Context) is called for each <key, (list of values)> pair. The output of the reducer is typically written to the file system, such as HDFS, via Context.write(WritableComparable, Writable), or it can be written to a database by using DBOutputFormat. The output of the reducer is not sorted.
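As a concrete example, here is a sketch of a driver that ties the hypothetical mapper and reducer above together and writes the result to an HDFS path via FileOutputFormat (the class name and argument handling are assumptions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal driver sketch: input and output paths are taken from the command line.
public class FilterCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "filter-count");
        job.setJarByClass(FilterCountDriver.class);
        job.setMapperClass(ErrorFilterMapper.class);   // from the mapper sketch above
        job.setReducerClass(SumReducer.class);         // from the reducer sketch above
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // e.g. an HDFS directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```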

If no reducer is needed, Job.setNumReduceTasks(int) can be used to set the number of reducers to 0 (if the number is not set this way, one reducer is started, because mapreduce.job.reduces defaults to 1). In that case the output of the mappers is written directly to the path specified by FileOutputFormat.setOutputPath(Job, Path), and the MapReduce framework does not sort the mapper output.
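A minimal sketch of configuring such a map-only job (the helper class and output-path handling are illustrative):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJob {
    // Sketch: with zero reduce tasks the framework skips the sort and writes
    // each mapper's output straight to the configured output path.
    public static void configureMapOnly(Job job, Path output) {
        job.setNumReduceTasks(0);
        FileOutputFormat.setOutputPath(job, output);
    }
}
```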

If, before the reduce runs, you want to group the intermediate keys with a comparison rule that differs from the one used to sort them, you can specify a different Comparator with Job.setSortComparatorClass(Class). That is, Job.setGroupingComparatorClass(Class) controls how the intermediate keys are grouped into a single reduce call, while Job.setSortComparatorClass(Class) controls how the keys are sorted before the data is passed to reduce; used together, the two can simulate a secondary sort on values.
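A sketch of this pattern, assuming a hypothetical composite Text key of the form "user:timestamp": the job sorts on the full key but groups on the user prefix only, so a single reduce call sees all of a user's values in timestamp order.

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Hypothetical grouping comparator for composite Text keys of the form "user:timestamp".
// It compares only the user prefix, so records that differ in timestamp
// still end up in the same reduce() call.
public class UserGroupingComparator extends WritableComparator {
    public UserGroupingComparator() {
        super(Text.class, true);   // create Text instances for deserialized comparison
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        String userA = a.toString().split(":", 2)[0];
        String userB = b.toString().split(":", 2)[0];
        return userA.compareTo(userB);   // group by the user prefix only
    }
}
```

The job would then be wired with job.setSortComparatorClass(Text.Comparator.class) for full-key ordering and job.setGroupingComparatorClass(UserGroupingComparator.class) for the coarser grouping.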

Unlike the number of mappers, which is determined by the size of the input, the number of reducers can be set explicitly by the programmer. So how many reducers give good results? The recommended number is (0.95 to 1.75) * number of nodes * maximum number of containers per node. The parameter yarn.scheduler.minimum-allocation-mb sets the minimum memory each container can request, so the maximum number of containers per node can be estimated by dividing the node's total memory by that parameter. With the factor 0.95, all reducers start immediately and begin transferring map outputs as soon as the maps complete. With 1.75, the faster nodes finish their first round of reduce tasks and then take on a second wave, which gives better load balancing. Increasing the number of reducers increases framework overhead, but it also improves load balancing and lowers the cost of failures. The scaling factors above are slightly less than whole numbers so that a few reducer slots remain reserved for speculative and failed tasks; the actual number of reducers is the result of the formula plus those reserved slots.
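For example, on a hypothetical cluster of 10 nodes where each node has 48 GB of memory available for containers and yarn.scheduler.minimum-allocation-mb is 2048, each node can hold at most 48 * 1024 / 2048 = 24 containers, so a starting point would be about 0.95 * 10 * 24 ≈ 228 reducers, or 1.75 * 10 * 24 = 420 if the better load balancing of a second wave is desired (the node count and memory figures here are purely illustrative).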
