Analysis of the JobTracker restart job recovery process
1. Configuration items related to job recovery
Configuration item | Default value | Description
mapred.jobtracker.restart.recover | false | If true, jobs that were running before the JobTracker restart are recovered after the restart; if false, those jobs have to be re-run.
mapred.jobtracker.job.history.block.size | 3145728 | Block size of the saved job history log file; the history logs are what job recovery is based on.
hadoop.job.history.location | ${hadoop.log.dir}/history | Location where the job history files are stored.
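As a small illustration, assuming the stock Hadoop Configuration API, the three settings above can be read as follows (the defaults passed in are the ones from the table; the class itself is a throwaway example):

import org.apache.hadoop.conf.Configuration;

public class RecoveryConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Whether jobs that were running before a JobTracker restart are recovered.
    boolean recoverOnRestart =
        conf.getBoolean("mapred.jobtracker.restart.recover", false);
    // Block size used when writing the job history log file (3145728 = 3 MB).
    long historyBlockSize =
        conf.getLong("mapred.jobtracker.job.history.block.size", 3 * 1024 * 1024);
    // Where the job history files are written; recovery parses files from here.
    String historyLocation = conf.get("hadoop.job.history.location");
    System.out.println(recoverOnRestart + " " + historyBlockSize + " " + historyLocation);
  }
}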
2. How a job is recovered
1) Log files: to recover a job that was running before the JobTracker (JT) restart, the job history log file is parsed first, and the running state of some of the job's tasks is restored from it.
2) After JT restarts, TaskTrackers (TTs) that reconnect to JT report the tasks they are currently running.
3) After JT restarts, tasks that could not be recovered through 1) and 2), as well as tasks that have not run yet, are rescheduled.
3. Classes involved in job recovery
The job recovery process is mainly driven by the JobTracker.RecoveryManager class; the JobHistory class is used to write and parse the job logs.
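As a rough orientation, the overall flow can be pictured as below. This is a comment-level sketch only: apart from the class and method names mentioned in this article, the structure is illustrative scaffolding, not the actual JobTracker code.

public class JobRecoveryFlowSketch {
  /** Phase 1: RecoveryManager.recover() replays each job's history log. */
  void replayHistoryLogs() {
    // Jobs registered via RecoveryManager.checkAndAddJob() have their history
    // files parsed by JobHistory, rebuilding job/task state line by line.
  }

  /** Phase 2: absorb task reports from TaskTrackers that reconnect. */
  void absorbTaskTrackerReports() {
    // JobTracker.updateTaskStatuses() attaches tasks reported by a TT to the
    // recovered job when history replay did not already know about them.
  }

  /** Phase 3: reschedule whatever phases 1 and 2 could not account for. */
  void rescheduleRemainingTasks() {
    // Tasks with no recovered state, and tasks that never ran, are scheduled
    // again as if the job had just been submitted.
  }
}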
4. Detailed job recovery process
1) After the JobTracker restarts, it checks whether there are jobs to recover; this is implemented in RecoveryManager.checkAndAddJob().
2) If there are jobs to recover, RecoveryManager.recover() is invoked from offerService() to start restoring them:
(1) Initialize the jobs: based on the files under the "mapred.system.dir" directory, cache each job's job ID and the path of its job history log file.
(2) Recover each job from its log file: given the path of the job's history log, JobHistory.parseHistoryFromFS(String path, Listener l, FileSystem fs) first parses the log file and extracts each line of information (path is the path of the log file, l is a listener); JobRecoveryListener.handle() then processes each parsed line to restore the job's running state. A sketch of this parsing step follows the code below.
3) After all job log files have been parsed and replayed, TaskTrackers that reconnect to JT report the status of the tasks they are running. This is handled in JobTracker.updateTaskStatuses(TaskTrackerStatus status):
if (tip != null || hasRestarted()) {
  if (tip == null) {
    tip = job.getTaskInProgress(taskId.getTaskID());
    job.addRunningTaskToTIP(tip, taskId, status, false);
  }
}
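To make step 2) (2) concrete, here is a minimal sketch, assuming the Hadoop 1.x JobHistory API named above (parseHistoryFromFS() and its Listener callback). The listener body is illustrative: it only prints the records that the real JobRecoveryListener would use to rebuild job and task state.

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapred.JobHistory;

public class HistoryReplaySketch {
  public static void replay(String historyFilePath, FileSystem fs) throws IOException {
    JobHistory.Listener listener = new JobHistory.Listener() {
      public void handle(JobHistory.RecordTypes recType,
                         Map<JobHistory.Keys, String> values) throws IOException {
        // The real JobRecoveryListener rebuilds job/task/attempt state from
        // these records; this stub only shows the shape of the callback.
        System.out.println(recType + " -> " + values.keySet());
      }
    };
    // Each line of the history file is parsed into a (record type, key/value
    // map) pair and handed to the listener.
    JobHistory.parseHistoryFromFS(historyFilePath, listener, fs);
  }
}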
5. Conclusion: although Hadoop provides this job recovery process after a JobTracker restart, and work on JobTracker HA has been going on recently, the process still has quite a few problems.
1) Job log records are buffered before being flushed to the log file, and an untimely flush can cause the following problems:
(1) After the JobTracker restart, when a job resumes running, some of its tasks appear to lose contact with JT:
17:11:49,224 INFO org.apache.hadoop.mapred.JobTracker: attempt_201202161703_0002_m_000039_0 is 200024 ms debug.
attempt_201202161703_0002_m_000039_0 is 400036 ms debug.
attempt_201202161703_0002_m_000039_0 is 600047 ms debug.
Launching task attempt_201202161703_0002_m_000039_0 timed out.
The task is therefore only re-run after about nine minutes, once JT decides it has timed out.
Cause: because the log records were not flushed to the log file in time, the log file contains the record of the task attempt being launched but not the record of it finishing (in fact the attempt had already completed successfully). When the job is recovered, JT mistakenly believes the attempt is still running and waits for the TT to report its status, until the attempt times out. (The attempt finished before the JT restart, so after the restart the TT no longer reports its status.)
(2) In a small job whose tasks are all already running, if the job log has not yet been flushed to the log file when JT restarts, then when the job is re-run the job-cleanup task can complete before the job-setup task, so JT forever considers the job to be still running.
(3) When a job is re-run after the JT restart, its reduce tasks can complete before its map tasks.
2) After JT restarts and a TT reconnects to JT and reports its task statuses, JT's handling does not consider tasks that are in the task-cleanup state (cleanup attempts of killed/failed tasks):
if (tip != null || hasRestarted()) {
  if (tip == null) {
    tip = job.getTaskInProgress(taskId.getTaskID());
    job.addRunningTaskToTIP(tip, taskId, status, false);
  }
}
As shown above, JT handles every reported task the same way: job.addRunningTaskToTIP(tip, taskId, status, false). If a reported task is actually a cleanup attempt of a killed/failed task, JT still treats it as a normal running task, yet such a cleanup attempt reuses the same attempt ID as the original attempt of that task. As a result, when JT schedules, one task ends up with two attempts sharing the same attempt ID, and the task stays stuck forever (this is a different issue from HADOOP-5394).
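A hedged sketch of the kind of special-casing that seems to be missing is shown below, in the same fragment style as the snippet above. isTaskCleanupAttempt() is a hypothetical helper, not an existing JobTracker method; a real fix would have to fit JobTracker's internal bookkeeping.

if (tip != null || hasRestarted()) {
  if (tip == null) {
    tip = job.getTaskInProgress(taskId.getTaskID());
    if (isTaskCleanupAttempt(job, taskId)) {
      // Hypothetical branch: register the attempt as a cleanup attempt of a
      // killed/failed task instead of as a fresh running attempt, so the
      // scheduler does not end up with two attempts sharing one ID.
    } else {
      job.addRunningTaskToTIP(tip, taskId, status, false);
    }
  }
}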
3) Possible fixes:
(1) For the problem of logs not being flushed in time, add a log4j appender that periodically flushes the logs to disk (see the sketch at the end of this article).
(2) For problem 2) above, add handling for tasks in the task-cleanup (killed/failed uncleanup) state inside the if (tip != null || hasRestarted()) branch.
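A minimal sketch of fix (1), assuming log4j 1.2 (the logging library Hadoop of this era ships with): a FileAppender subclass that flushes its underlying writer on a fixed interval, so buffered records reach disk even if the JobTracker dies shortly afterwards. The class name and the flushIntervalMs property are illustrative, not part of Hadoop or log4j.

import org.apache.log4j.FileAppender;

public class PeriodicFlushFileAppender extends FileAppender {
  // Flush interval in milliseconds; log4j can set this through the
  // bean-style flushIntervalMs property.
  private long flushIntervalMs = 5000;
  private Thread flusher;

  public void setFlushIntervalMs(long intervalMs) {
    this.flushIntervalMs = intervalMs;
  }

  @Override
  public void activateOptions() {
    super.activateOptions();
    flusher = new Thread(new Runnable() {
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          try {
            Thread.sleep(flushIntervalMs);
          } catch (InterruptedException e) {
            return;
          }
          synchronized (PeriodicFlushFileAppender.this) {
            if (qw != null) {
              qw.flush(); // push buffered records out to the log file
            }
          }
        }
      }
    }, "periodic-log-flusher");
    flusher.setDaemon(true);
    flusher.start();
  }

  @Override
  public synchronized void close() {
    if (flusher != null) {
      flusher.interrupt();
    }
    super.close();
  }
}

Such an appender is configured like an ordinary FileAppender, with the extra flushIntervalMs property; a shorter interval narrows the window in which a JT crash can lose history records.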