When the client submits a job, the ResourceManager (RM) first allocates a container on a NodeManager (NM).
The client then communicates directly with that NM to start the ApplicationMaster (AM) in the container. The AM is fully responsible for the job's progress and for handling failures (there is only one AM per job).
The AM calculates the resources the job needs, requests them from the RM, and obtains a set of containers in which the map/reduce tasks run, working with the NMs to do the necessary preparation for each container.
While the tasks execute, the AM continuously monitors their progress; if a task fails in a container on some NM, the AM finds a new node on which to rerun the task.
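The AM's monitor-and-rerun behavior can be illustrated with a minimal Python sketch. This is a simulation of the idea, not the Hadoop API: the names `monitor_task` and `MAX_ATTEMPTS` are illustrative (real MapReduce bounds retries with settings such as `mapreduce.map.maxattempts`).

```python
MAX_ATTEMPTS = 4  # illustrative retry bound, not a real Hadoop default name

def monitor_task(task_id, nodes, run):
    """Simulate the AM retrying a failed task on a fresh node each attempt.

    `run(task_id, node)` stands in for launching the task in a container
    on that node and returns True on success.
    """
    for attempt, node in enumerate(nodes[:MAX_ATTEMPTS], start=1):
        if run(task_id, node):
            return node, attempt  # which node succeeded, on which attempt
    raise RuntimeError(f"task {task_id} failed after {MAX_ATTEMPTS} attempts")

# Example: the first node always fails, so the AM moves the task to nm-2.
node, attempt = monitor_task("map_0", ["nm-1", "nm-2", "nm-3"],
                             lambda tid, nm: nm != "nm-1")
```

Here the key design point is that the retry goes to a *different* node, since the failure may be caused by the node itself rather than the task.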
The process is as follows:
MRv2 Running Process:
The MR JobClient submits a job to the ResourceManager (RM)
The RM asks the scheduler for a container in which to run the MR AM, then starts it
Once started, the MR AM registers with the RM
The MR JobClient obtains the MR AM's information from the RM and then communicates directly with the MR AM
The MR AM computes the input splits and builds resource requests for all map tasks
The MR AM performs the necessary setup for the MR OutputCommitter
The MR AM sends resource requests to the RM (scheduler), obtains a set of containers for the map/reduce tasks to run in, and, together with the NMs, prepares each container, including resource localization
The MR AM monitors the running tasks until they finish; when a task fails, it requests a new container in which to rerun the failed task
As each map/reduce task completes, the MR AM runs the MR OutputCommitter's cleanup code to do the finishing work
When all map/reduce tasks are complete, the MR AM runs the necessary job commit or abort APIs of the OutputCommitter
 
The MR AM exits.
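The steps above can be sketched as an ordered event trace. This is a minimal Python simulation of the described control flow, not Hadoop code; the event strings and the function name `run_mrv2_job` are illustrative (only `OutputCommitter.setupJob`/`commitJob` correspond to real MapReduce API names).

```python
def run_mrv2_job(num_maps, num_reduces):
    """Return the sequence of MRv2 control-flow events described above."""
    events = []
    events.append("client: submit job to RM")
    events.append("RM: allocate container, start MR AM")
    events.append("AM: register with RM")
    events.append("client: talk directly to AM")
    events.append(f"AM: compute splits, request {num_maps} map "
                  f"+ {num_reduces} reduce containers")
    events.append("AM: OutputCommitter.setupJob")
    # With the NMs, localize resources and launch each task container.
    for i in range(num_maps):
        events.append(f"AM+NM: localize and launch map {i}")
    for i in range(num_reduces):
        events.append(f"AM+NM: localize and launch reduce {i}")
    events.append("AM: monitor tasks, rerun failures in new containers")
    events.append("AM: OutputCommitter task cleanup as tasks finish")
    events.append("AM: OutputCommitter.commitJob")
    events.append("AM: exit")
    return events
```

Tracing a small job, e.g. `run_mrv2_job(2, 1)`, makes the ordering explicit: the AM is started and registered before any task container exists, and the job-level commit happens only after every task's cleanup.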
The workflow of MapReduce on YARN