Hadoop Source Code Analysis (internal classes and auxiliary classes for tasks)


From the previous diagram we can see that Task has many inner classes, along with a large number of member variables that cooperate with the Task to accomplish the related work, such as the following.



MapOutputFile manages the mapper's output files. It provides a series of get methods for obtaining the various files that the mapper needs; these files all live under a single directory.
Suppose the job ID passed to MapOutputFile is job_200707121733_0003 and the task ID is task_200707121733_0003_m_000005. The root directory used by MapOutputFile is
{mapred.local.dir}/taskTracker/jobcache/{jobid}/{taskid}/output
In the discussion below we abbreviate this path as {mapOutputFileRoot}.
With the job ID and task ID above as an example, this is:
{mapred.local.dir}/taskTracker/jobcache/job_200707121733_0003/task_200707121733_0003_m_000005/output
Note that {mapred.local.dir} can contain a list of paths; Hadoop searches those roots for a directory that satisfies the request and creates the required file there. Each getter comes in two forms, with and without the ForWrite suffix. The ForWrite variants are used when creating a file and take the intended file size as a parameter so that available disk space can be checked; the variants without ForWrite are used to locate a file that has already been created.
getOutputFile: file name is {mapOutputFileRoot}/file.out
getOutputIndexFile: file name is {mapOutputFileRoot}/file.out.index
getSpillFile: file name is {mapOutputFileRoot}/spill{spillNumber}.out
getSpillIndexFile: file name is {mapOutputFileRoot}/spill{spillNumber}.out.index
The above four methods are used in the Task subclass MapTask.
getInputFile: file name is {mapOutputFileRoot}/map_{mapid}.out
This one is used in ReduceTask. We will describe the corresponding usage scenarios when we reach the code that calls these methods; a sketch of the resulting layout appears below.
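As an illustration, the following Java sketch derives these paths. This is not the real MapOutputFile API: the class name and constructor are invented for this article, and the selection among multiple {mapred.local.dir} entries (done by a LocalDirAllocator in Hadoop) is reduced to a single directory.

    // Hypothetical sketch of the MapOutputFile directory layout described above.
    // Real Hadoop picks a directory out of mapred.local.dir; here we take just one.
    public class MapOutputPaths {
        private final String localDir; // one entry of mapred.local.dir
        private final String jobId;    // e.g. "job_200707121733_0003"
        private final String taskId;   // e.g. "task_200707121733_0003_m_000005"

        public MapOutputPaths(String localDir, String jobId, String taskId) {
            this.localDir = localDir;
            this.jobId = jobId;
            this.taskId = taskId;
        }

        // {mapred.local.dir}/taskTracker/jobcache/{jobid}/{taskid}/output
        private String root() {
            return localDir + "/taskTracker/jobcache/" + jobId + "/" + taskId + "/output";
        }

        public String outputFile()          { return root() + "/file.out"; }
        public String outputIndexFile()     { return root() + "/file.out.index"; }
        public String spillFile(int n)      { return root() + "/spill" + n + ".out"; }
        public String spillIndexFile(int n) { return root() + "/spill" + n + ".out.index"; }
        public String inputFile(int mapId)  { return root() + "/map_" + mapId + ".out"; }
    }

For example, new MapOutputPaths("/data/1", "job_200707121733_0003", "task_200707121733_0003_m_000005").spillFile(0) yields /data/1/taskTracker/jobcache/job_200707121733_0003/task_200707121733_0003_m_000005/output/spill0.out.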

With temporary-file management covered, let us look at Task.CombineOutputCollector. It implements org.apache.hadoop.mapred.OutputCollector and is very simple: it is just an adapter from OutputCollector to IFile.Writer, letting IFile.Writer do the actual work (see the sketch below).
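The adapter idea fits in a few lines. The sketch below uses simplified stand-in interfaces: Writer is a hypothetical reduction of IFile.Writer, and OutputCollector is redeclared locally so the example is self-contained.

    import java.io.IOException;

    // Stand-in for IFile.Writer: something that can append key/value pairs.
    interface Writer<K, V> {
        void append(K key, V value) throws IOException;
    }

    // Stand-in for org.apache.hadoop.mapred.OutputCollector.
    interface OutputCollector<K, V> {
        void collect(K key, V value) throws IOException;
    }

    // The adapter: collect() simply delegates to the current writer.
    class CombineOutputCollectorSketch<K, V> implements OutputCollector<K, V> {
        private Writer<K, V> writer;

        // The writer can be replaced, e.g. when a new spill file is opened.
        public synchronized void setWriter(Writer<K, V> writer) {
            this.writer = writer;
        }

        public synchronized void collect(K key, V value) throws IOException {
            writer.append(key, value); // the writer does the real work
        }
    }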

ValuesIterator wraps a RawKeyValueIterator (whose keys and values are DataInputBuffers; ValuesIterator requires that the input is already sorted) and, using a RawComparator, provides an iterator over the values that belong to the same key under that comparator. It has a simple subclass in Task, CombineValuesIterator. A simplified sketch follows.
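To make the grouping behavior concrete, here is a simplified sketch. It works on plain Java objects instead of RawKeyValueIterator/DataInputBuffer, and the names are invented for this article; the contract is the same: the input must be sorted by key, and values() yields exactly the values that compare equal to the current key.

    import java.util.Comparator;
    import java.util.Iterator;
    import java.util.Map;

    class ValuesIteratorSketch<K, V> {
        private final Iterator<Map.Entry<K, V>> in; // must be sorted by key
        private final Comparator<K> comparator;
        private Map.Entry<K, V> lookahead;           // next unconsumed record
        private K currentKey;

        ValuesIteratorSketch(Iterator<Map.Entry<K, V>> in, Comparator<K> comparator) {
            this.in = in;
            this.comparator = comparator;
            this.lookahead = in.hasNext() ? in.next() : null;
        }

        boolean more() { return lookahead != null; }

        // Starts the group for the next distinct key and returns that key.
        K nextKey() {
            currentKey = lookahead.getKey();
            return currentKey;
        }

        // Values of the current group, consumed lazily from the input.
        Iterator<V> values() {
            return new Iterator<V>() {
                public boolean hasNext() {
                    return lookahead != null
                        && comparator.compare(lookahead.getKey(), currentKey) == 0;
                }
                public V next() {
                    V v = lookahead.getValue();
                    lookahead = in.hasNext() ? in.next() : null;
                    return v;
                }
            };
        }
    }

CombineValuesIterator would be driven the same way: while more() holds, call nextKey() and feed values() to the combiner.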

Task.TaskReporter is used to submit counter reports and status reports; it implements both the counter-reporting interface Reporter and the status-reporting interface StatusReporter. So as not to interfere with the main thread's work, TaskReporter runs in a separate thread and uses Hadoop's RPC mechanism, through the TaskUmbilicalProtocol interface, to report the task's execution status (the child task talks to its TaskTracker, which relays the information to the JobTracker). The sketch below illustrates the pattern.
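In the sketch below, Umbilical is a hypothetical stand-in for TaskUmbilicalProtocol (in real Hadoop the child obtains an RPC proxy to its TaskTracker), and the three-second interval is an arbitrary choice for the example.

    // Hypothetical reporting thread, modeled on the TaskReporter pattern.
    interface Umbilical {
        void statusUpdate(String taskId, String status);
    }

    class TaskReporterSketch implements Runnable {
        private final Umbilical umbilical;
        private final String taskId;
        private volatile String status = "RUNNING";
        private volatile boolean done = false;

        TaskReporterSketch(Umbilical umbilical, String taskId) {
            this.umbilical = umbilical;
            this.taskId = taskId;
        }

        void setStatus(String s) { status = s; } // called from the task's main thread
        void stop()              { done = true; }

        public void run() {
            while (!done) {
                umbilical.statusUpdate(taskId, status); // an RPC call in real Hadoop
                try {
                    Thread.sleep(3000); // report interval, simplified
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        void start() {
            Thread t = new Thread(this, "Comm thread");
            t.setDaemon(true); // do not keep the task JVM alive
            t.start();
        }
    }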

FileSystemStatisticUpdater is a simple utility class for recording the amount of data read from and written to the file system; a sketch follows.
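Its job amounts to copying byte counts gathered by the file-system layer into task counters. The sketch below is an invented reduction: the stand-in fields replace FileSystem statistics and task Counters, and the method names are not the real Hadoop API.

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical reduction of FileSystemStatisticUpdater.
    class FsStatisticUpdaterSketch {
        // Stand-ins for the file system's statistics, updated by I/O code.
        private final AtomicLong bytesRead = new AtomicLong();
        private final AtomicLong bytesWritten = new AtomicLong();

        // Stand-ins for the task's counters.
        private long readCounter;
        private long writeCounter;

        void recordRead(long n)  { bytesRead.addAndGet(n); }
        void recordWrite(long n) { bytesWritten.addAndGet(n); }

        // Called periodically (e.g. from the reporter thread) to refresh counters.
        void updateCounters() {
            readCounter  = bytesRead.get();
            writeCounter = bytesWritten.get();
        }
    }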
