From the previous diagram, we can see that Task has many inner classes, along with a large number of class member variables that cooperate with Task to accomplish the related work.
MapOutputFile manages the mapper's output files. It provides a series of get methods for obtaining the various files the mapper needs, all of which are stored under one directory.
Suppose the JobID passed to MapOutputFile is job_200707121733_0003 and the TaskID is task_200707121733_0003_m_000005. The root directory used by MapOutputFile is
{mapred.local.dir}/taskTracker/jobcache/{jobid}/{taskid}/output
In the following discussion, we will write the above path as {MapOutputFileRoot}.
Taking the above JobID and TaskID as an example, this is:
{mapred.local.dir}/taskTracker/jobcache/job_200707121733_0003/task_200707121733_0003_m_000005/output
Note that {mapred.local.dir} can contain a list of paths; Hadoop will look under these roots for a directory that meets the requirements and create the needed file there. MapOutputFile's methods come in two flavors: with forWrite and without forWrite. The forWrite variants are used to create files and take a file size as a parameter so that disk space can be checked; the variants without forWrite are used to obtain files that have already been created.
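To make the two flavors concrete, here is a minimal, self-contained sketch of the idea in plain Java. It is not the actual MapOutputFile code, and all names in it are illustrative: the forWrite flavor scans the configured roots for one with enough free space and returns a path to create, while the plain flavor returns the path of a file that already exists.

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch of the "forWrite" vs. plain lookup idea; the real logic
// in MapOutputFile is more involved.
public class LocalPathSketch {
    private final File[] localDirs;   // the roots listed in mapred.local.dir

    public LocalPathSketch(File[] localDirs) {
        this.localDirs = localDirs;
    }

    // "forWrite" flavor: needs the expected size so disk space can be checked,
    // then returns a path under a root that has enough room.
    public File getPathForWrite(String relativePath, long expectedSize) throws IOException {
        for (File root : localDirs) {
            if (root.getUsableSpace() > expectedSize) {
                File target = new File(root, relativePath);
                target.getParentFile().mkdirs();  // create the output directory
                return target;
            }
        }
        throw new IOException("No local directory has " + expectedSize + " bytes free");
    }

    // Plain flavor: the file has already been created, so just find it.
    public File getPathToRead(String relativePath) throws IOException {
        for (File root : localDirs) {
            File candidate = new File(root, relativePath);
            if (candidate.exists()) {
                return candidate;
            }
        }
        throw new IOException("File not found under any local directory: " + relativePath);
    }
}
```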
getOutputFile: file name is {MapOutputFileRoot}/file.out;
getOutputIndexFile: file name is {MapOutputFileRoot}/file.out.index;
getSpillFile: file name is {MapOutputFileRoot}/spill{spillNumber}.out;
getSpillIndexFile: file name is {MapOutputFileRoot}/spill{spillNumber}.out.index.
The above four methods are used in the Task subclass MapTask.
getInputFile: file name is {MapOutputFileRoot}/map_{mapId}.out
Used in ReduceTask. We will introduce the corresponding application scenarios when we reach the places where these methods are used; a small sketch of the naming convention follows.
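As a quick summary of the naming convention above, here is a tiny illustrative helper. The class and method names are simplified stand-ins, not the real MapOutputFile signatures; it only shows how the file names are derived from {MapOutputFileRoot}.

```java
// Illustrative summary of the file names used under {MapOutputFileRoot};
// names are simplified relative to the real MapOutputFile methods.
public class MapOutputNames {
    private final String root;  // e.g. ".../jobcache/<jobid>/<taskid>/output"

    public MapOutputNames(String root) { this.root = root; }

    public String outputFile()               { return root + "/file.out"; }
    public String outputIndexFile()          { return root + "/file.out.index"; }
    public String spillFile(int spillNumber) { return root + "/spill" + spillNumber + ".out"; }
    public String spillIndexFile(int spillNumber) {
        return root + "/spill" + spillNumber + ".out.index";
    }
    public String inputFile(int mapId)       { return root + "/map_" + mapId + ".out"; }
}
```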
With temporary file management covered, we now look at Task.CombineOutputCollector. It implements org.apache.hadoop.mapred.OutputCollector and is very simple: it is just an adapter from OutputCollector to IFile.Writer, letting IFile.Writer do the actual work.
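The adapter idea can be shown in a short, self-contained sketch. The two interfaces are re-declared here so the snippet compiles on its own; in the real code they are org.apache.hadoop.mapred.OutputCollector and IFile.Writer, and the class below is only an approximation of Task.CombineOutputCollector.

```java
import java.io.IOException;

// Simplified sketch of the adapter idea behind Task.CombineOutputCollector:
// an OutputCollector whose collect() simply forwards to a writer's append().
interface OutputCollector<K, V> {
    void collect(K key, V value) throws IOException;
}

interface KeyValueWriter<K, V> {          // stand-in for IFile.Writer
    void append(K key, V value) throws IOException;
}

class CombineOutputCollectorSketch<K, V> implements OutputCollector<K, V> {
    private KeyValueWriter<K, V> writer;

    // The writer is swapped in before each combine pass.
    public synchronized void setWriter(KeyValueWriter<K, V> writer) {
        this.writer = writer;
    }

    // All of the "collecting" work is delegated to the writer.
    public synchronized void collect(K key, V value) throws IOException {
        writer.append(key, value);
    }
}
```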
ValuesIterator is used to obtain, from a RawKeyValueIterator (whose keys and values are DataInputBuffers; ValuesIterator requires that the input is already sorted), an iterator over the values belonging to keys that compare as equal under a RawComparator<KEY> comparator. It has a simple subclass in Task, CombineValuesIterator.
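The grouping behavior is easier to see in a simplified, self-contained sketch: given an already-sorted stream of key/value pairs and a comparator, it hands out an iterator over the values that share the current key. This is only an illustration of the idea; the real ValuesIterator works on raw DataInputBuffers taken from a RawKeyValueIterator.

```java
import java.util.Comparator;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

// Simplified illustration of the ValuesIterator idea: walk a *sorted* sequence
// of key/value pairs and expose, for the current key, an iterator over its values.
class GroupingIterator<K, V> {
    private final Iterator<Map.Entry<K, V>> sortedInput;
    private final Comparator<K> comparator;
    private Map.Entry<K, V> next;      // look-ahead record
    private K currentKey;

    GroupingIterator(Iterator<Map.Entry<K, V>> sortedInput, Comparator<K> comparator) {
        this.sortedInput = sortedInput;
        this.comparator = comparator;
        this.next = sortedInput.hasNext() ? sortedInput.next() : null;
    }

    boolean more() { return next != null; }

    K getKey() { return currentKey; }

    // Iterator over all values whose key compares equal to the current key.
    Iterator<V> nextKeyValues() {
        currentKey = next.getKey();
        return new Iterator<V>() {
            public boolean hasNext() {
                return next != null && comparator.compare(next.getKey(), currentKey) == 0;
            }
            public V next() {
                if (!hasNext()) throw new NoSuchElementException();
                V value = GroupingIterator.this.next.getValue();
                GroupingIterator.this.next =
                    sortedInput.hasNext() ? sortedInput.next() : null;
                return value;
            }
        };
    }
}
```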
Task.TaskReporter is used to submit counter reports and status reports to the JobTracker; it implements Reporter (for counter reports) and StatusReporter (for status reports). So as not to interfere with the main thread's work, TaskReporter runs a separate thread that reports the task's execution to the JobTracker through the TaskUmbilicalProtocol interface, using Hadoop's RPC mechanism.
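The "separate reporting thread" pattern can be sketched as follows; the Umbilical interface and its method here are stand-ins for TaskUmbilicalProtocol, not the real RPC signatures.

```java
// Simplified sketch of the periodic-reporting idea behind Task.TaskReporter:
// a background thread wakes up at a fixed interval and pushes progress/status
// over an "umbilical" interface.
class ProgressReporterSketch implements Runnable {

    interface Umbilical {                     // stand-in for TaskUmbilicalProtocol
        void reportProgress(String taskId, float progress, String status);
    }

    private final Umbilical umbilical;
    private final String taskId;
    private volatile float progress;
    private volatile String status = "running";
    private volatile boolean done;

    ProgressReporterSketch(Umbilical umbilical, String taskId) {
        this.umbilical = umbilical;
        this.taskId = taskId;
    }

    // Called by the main task thread; cheap, never blocks on RPC.
    void setProgress(float progress) { this.progress = progress; }
    void setStatus(String status)    { this.status = status; }
    void stop()                      { this.done = true; }

    // Runs in its own thread so RPC latency never stalls the task itself.
    public void run() {
        while (!done) {
            umbilical.reportProgress(taskId, progress, status);
            try {
                Thread.sleep(3000);           // report every few seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

The main task thread only touches the cheap volatile setters; the RPC channel is used exclusively from the reporter thread, so a slow JobTracker never stalls the task.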
FileSystemStatisticUpdater is a simple utility class for recording reads of and writes to the file system.
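A minimal sketch of such a read/write recorder might look like the following; the class and method names are illustrative, not the actual FileSystemStatisticUpdater API.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of a read/write statistics recorder in the spirit of
// FileSystemStatisticUpdater; class and method names are illustrative.
class FsStatsSketch {
    private final AtomicLong bytesRead = new AtomicLong();
    private final AtomicLong bytesWritten = new AtomicLong();

    void recordRead(long bytes)  { bytesRead.addAndGet(bytes); }
    void recordWrite(long bytes) { bytesWritten.addAndGet(bytes); }

    long getBytesRead()    { return bytesRead.get(); }
    long getBytesWritten() { return bytesWritten.get(); }
}
```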