Timing the stages of a Hadoop MapTask and ReduceTask


io.block.size: 64 MB

mapred.mapinput.min.splitsize: 512 MB

io.sort.mb: 512 MB

Each MapTask's input is 512 MB of data, and during each MapTask the in-memory output buffer overflows and spills to disk three times.
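For reference, a minimal sketch of a driver applying these settings through the old JobConf API is shown below. The first two property names are copied verbatim from this article; on stock Hadoop the corresponding keys are usually dfs.block.size and mapred.min.split.size, so treat the names here as specific to this test environment rather than canonical.

    import org.apache.hadoop.mapred.JobConf;

    // Minimal driver fragment (Hadoop 0.20-era mapred API assumed) applying
    // the test configuration above. The first two property names are copied
    // verbatim from this article; stock Hadoop usually spells them
    // dfs.block.size and mapred.min.split.size.
    public class TestJobConfig {
        public static JobConf configure() {
            JobConf conf = new JobConf(TestJobConfig.class);
            conf.setLong("io.block.size", 64L << 20);                  // 64 MB block size
            conf.setLong("mapred.mapinput.min.splitsize", 512L << 20); // 512 MB minimum split
            conf.setInt("io.sort.mb", 512);                            // 512 MB map output sort buffer
            return conf;
        }
    }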

The following is the time spent in each stage, broken down from the logs:


Each TaskTracker keeps a queue of the tasks the JobTracker hands it; we take the moment a task is pulled off this queue as the time origin for all the measurements below.
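As a toy illustration of this measurement scheme (not TaskTracker source; the queue and phase names are stand-ins), each phase can be reported as an offset from the dequeue instant:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Toy illustration of the measurement: the moment a task is pulled off
    // the queue is the time origin t0, and each phase is then reported as an
    // offset from t0. Runnable stands in for the real task object.
    public class PhaseTimer {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
            taskQueue.put(() -> { /* placeholder for "localize job files" */ });

            Runnable task = taskQueue.take();   // time origin: task leaves the queue
            long t0 = System.nanoTime();

            task.run();
            report("phase 1 (localize job files)", t0);
            // ... subsequent phases (start TaskRunner, launch child JVM, run
            // the MapTask) would be reported the same way.
        }

        static void report(String phase, long t0) {
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(phase + " done at t0+" + ms + " ms");
        }
    }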

1. Hadoop first copies the task-related files (job.split, job.xml, and job.jar) from HDFS to the TaskTracker's local file system. This takes less than 1 s, because job.split is already on the local disk.

2. Once all required resources are local, Hadoop starts a TaskRunner thread for the task. The TaskRunner performs some initialization, such as creating a temporary working directory, and finally launches a child JVM process. This phase takes 2 s.

3. The child process communicates with the TaskTracker process to obtain the JvmTask object it needs in order to run the task. This phase takes 2 s.

4. The child process performs some initialization of its own, which takes 2 s, and then begins actually running the MapTask.

5. The first time the map output buffer fills takes 52 s; sorting the buffered records then takes 2 s, and writing the sorted data out to a spill file takes 18 s.

6. The second time the buffer fills takes 39 s; sorting takes 1 s, and writing the spill file takes 14 s.

7. The third spill is a flush that writes the data remaining in the buffer to a spill file; it takes 1 s in total.

8. Finally, the three spill files are merged, which takes less than 1 s. (A generic sketch of this sort-spill-merge pattern follows this list.)
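The behavior in steps 5 through 8 is the classic external-sort pattern. Below is a generic, self-contained Java sketch of that pattern, not Hadoop's actual MapOutputBuffer: a tiny buffer fills, is sorted and spilled, the remainder is flushed, and the sorted spill files are merged with a priority queue.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.PriorityQueue;

    // Generic external-sort sketch, NOT Hadoop's MapOutputBuffer: records
    // fill a fixed-size buffer; a full buffer is sorted and spilled to a
    // file; a final flush spills the remainder; the sorted spills are then
    // merged with a priority queue, mirroring steps 5-8 above.
    public class SpillAndMerge {
        static final int BUFFER_CAPACITY = 4;  // tiny, so this demo spills three times

        public static void main(String[] args) throws IOException {
            int[] input = {42, 7, 19, 3, 88, 1, 55, 23, 14, 9};
            List<Path> spills = new ArrayList<>();
            List<Integer> buffer = new ArrayList<>(BUFFER_CAPACITY);

            for (int record : input) {
                buffer.add(record);
                if (buffer.size() == BUFFER_CAPACITY) {
                    spills.add(spill(buffer));  // buffer full: sort, then spill
                }
            }
            if (!buffer.isEmpty()) {
                spills.add(spill(buffer));      // final flush (step 7)
            }
            merge(spills);                      // merge all spill files (step 8)
        }

        // Sort the buffered records and write them to a fresh spill file.
        static Path spill(List<Integer> buffer) throws IOException {
            Collections.sort(buffer);
            Path file = Files.createTempFile("spill", ".txt");
            try (BufferedWriter w = Files.newBufferedWriter(file)) {
                for (int v : buffer) {
                    w.write(Integer.toString(v));
                    w.newLine();
                }
            }
            buffer.clear();
            return file;
        }

        // One reader per spill file; the priority queue always yields the
        // smallest current head, producing one fully sorted output stream.
        static void merge(List<Path> spills) throws IOException {
            PriorityQueue<Head> pq = new PriorityQueue<>();
            for (Path p : spills) {
                pq.add(new Head(Files.newBufferedReader(p)));  // spills are never empty here
            }
            StringBuilder out = new StringBuilder("merged:");
            while (!pq.isEmpty()) {
                Head h = pq.poll();
                out.append(' ').append(h.value);
                if (h.advance()) {
                    pq.add(h);          // re-insert with its next record
                } else {
                    h.reader.close();   // this spill file is exhausted
                }
            }
            System.out.println(out);
        }

        // A spill file's current smallest record plus its reader.
        static class Head implements Comparable<Head> {
            final BufferedReader reader;
            int value;

            Head(BufferedReader reader) throws IOException {
                this.reader = reader;
                advance();              // load the first record
            }

            boolean advance() throws IOException {
                String line = reader.readLine();
                if (line == null) return false;
                value = Integer.parseInt(line);
                return true;
            }

            public int compareTo(Head other) {
                return Integer.compare(value, other.value);
            }
        }
    }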

Hadoop's own counters report 134 s in total for this task, which agrees with the sum of the stage times above (about 1 + 2 + 2 + 2 + (52 + 2 + 18) + (39 + 1 + 14) + 1 + about 1, roughly 134 s), so the breakdown is consistent.


The ReduceTask can be divided into three phases:

Copy: copying starts as soon as the first map finishes, but it cannot end until every map has finished, so the measured time for this phase is long: 200 s.

Sort: because a combiner is used, very little data reaches the reduce side; each ReduceTask here handles only 242 records, about 50 KB in total, so the sort takes just 1 s (a minimal combiner sketch follows this list).

Reduce: less than 1 s.
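The tiny reduce-side volume is the combiner's doing: it pre-aggregates map output on the map side before the data is shuffled. As an illustration, a WordCount-style job on the old mapred API (hypothetical class name, assumed setup) can reuse its reducer as the combiner:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical WordCount-style reducer (old mapred API assumed) that can
    // also serve as the combiner, pre-aggregating map output locally so that
    // only a trickle of records reaches the reduce side.
    public class SumReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    // In the job driver:
    //   conf.setCombinerClass(SumReducer.class);  // run on the map side first
    //   conf.setReducerClass(SumReducer.class);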



Further reading: Hadoop Job Tuning, http://www.searchtb.com/2010/12/hadoop-job-tuning.html


