1. What is YARN?
Given the industry's shift in how distributed systems are used and the long-term development of the Hadoop framework, the JobTracker/TaskTracker mechanism of MapReduce required large-scale adjustments to fix its flaws in scalability, memory consumption, threading model, reliability, and performance. Over the past few years the Hadoop development team applied a number of fixes, but their cost kept rising, a sign that changing the original framework was becoming ever more difficult. To fundamentally address the performance bottlenecks of the old MapReduce framework, and to support the longer-term development of Hadoop, the MapReduce framework was completely refactored starting with release 0.23.0. The new Hadoop MapReduce framework is named MapReduce V2, or YARN.
YARN is the resource management system newly introduced in release 0.23.0. It evolved directly from MRv1 (0.20.x, 0.21.x, 0.22.x), and its core idea is:
The resource management and job scheduling functions that MRv1 combined in the JobTracker are split between two separate processes, the ResourceManager and the ApplicationMaster.
1) ResourceManager: responsible for resource management and scheduling across the whole cluster
2) ApplicationMaster: responsible for application-specific matters, such as task scheduling, task monitoring, and fault tolerance
2. Why use YARN?
Compared with the old MapReduce, YARN adopts a layered cluster framework that resolves a series of defects in the old design and offers the following advantages.
1. HDFS Federation is introduced, allowing multiple NameNodes to each manage different directories, which provides access isolation and horizontal scale-out. For the single point of failure of a running NameNode, a hot-standby scheme (NameNode HA) is implemented.
2. YARN separates resource management from application management, handled by the ResourceManager and the ApplicationMaster respectively. The ResourceManager is dedicated to resource management and scheduling, while the ApplicationMaster handles application-specific work such as splitting the job into tasks, task scheduling, and fault tolerance; each application has its own ApplicationMaster.
3. YARN is backward compatible: jobs that ran on MRv1 run on YARN without any modification.
4. Expressing resources in terms of memory (the current version of YARN does not yet take CPU usage into account) is more reasonable than counting remaining slots.
5. Multiple frameworks are supported. YARN is no longer just a computing framework but a framework manager: users can port various computing frameworks onto YARN, which then handles unified management and resource allocation. Many computing frameworks can now run on YARN, such as MapReduce, Storm, Spark, and Flink.
6. Framework upgrades are easier. In YARN, the computing frameworks are no longer deployed as services on every node of the cluster (for example, the MapReduce framework no longer needs to deploy JobTracker, TaskTracker, and other services). Instead, the framework is packaged as a user library (lib) kept on the client, so upgrading the computing framework only means upgrading that library.
3. What is the YARN architecture made up of?
First, let's look at YARN's architecture diagram.
As the architecture diagram shows, YARN is mainly composed of the ResourceManager, NodeManager, ApplicationMaster, Container, and a few other components.
1, ResourceManager (RM)
The heart of YARN's layered structure is the ResourceManager. This entity controls the entire cluster and manages how applications are allocated the underlying compute resources. The ResourceManager carves up the cluster's resources (compute, memory, bandwidth, and so on) among the NodeManagers (YARN's per-node agents). The ResourceManager also works with the ApplicationMasters to allocate resources, and with the NodeManagers to launch and monitor their underlying applications. In this architecture, the ApplicationMaster has taken over some of the roles of the former TaskTracker, and the ResourceManager has taken the role of the JobTracker.
In general, the RM has the following functions:
1) Handling client requests
2) Starting and monitoring ApplicationMasters
3) Monitoring NodeManagers
4) Resource allocation and scheduling
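To make the RM's cluster-wide role concrete, here is a minimal sketch that uses Hadoop's YarnClient API to ask the ResourceManager for cluster metrics and the list of running NodeManagers. It assumes a reachable cluster whose configuration is on the classpath; the class name RmInfo is purely illustrative.

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmInfo {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration()); // reads yarn-site.xml from the classpath
        yarnClient.start();

        // The RM is the only component with a cluster-wide view:
        // it knows about every NodeManager and every application.
        System.out.println("NodeManagers: "
                + yarnClient.getYarnClusterMetrics().getNumNodeManagers());
        for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId() + " capability=" + node.getCapability());
        }
        yarnClient.stop();
    }
}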
2, ApplicationMaster (AM)
The ApplicationMaster manages each instance of an application running in YARN. The ApplicationMaster is responsible for negotiating resources from the ResourceManager and, through the NodeManagers, monitoring container execution and resource usage (CPU, memory, and so on). Note that while today's resources are fairly traditional (CPU cores, memory), the future may bring new resource types tailored to the task at hand (for example, a specific processing unit or a dedicated device). From YARN's perspective, the ApplicationMaster is user code, which raises a potential security issue. YARN assumes that ApplicationMasters may be buggy or even malicious, and therefore treats them as unprivileged code.
In general, the AM has the following roles:
1) Splitting the input data
2) Requesting resources for the application and assigning them to its internal tasks
3) Task monitoring and fault tolerance
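The snippet below is a rough sketch of the AM's side of this conversation, using Hadoop's AMRMClient to register with the RM, request a container, and heartbeat for allocations. A real AM runs inside a container that the RM has already launched for it and then handles the containers it is granted; the empty host, port, and tracking URL passed here are placeholders.

import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AmSketch {
    public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(new YarnConfiguration());
        rmClient.start();

        // Register this AM with the RM (host/port/tracking URL are placeholders).
        rmClient.registerApplicationMaster("", 0, "");

        // Ask the RM for one container with 1024 MB of memory and 1 vcore.
        Resource capability = Resource.newInstance(1024, 1);
        rmClient.addContainerRequest(
                new ContainerRequest(capability, null, null, Priority.newInstance(0)));

        // Heartbeat to the RM; granted containers arrive in the responses.
        AllocateResponse response = rmClient.allocate(0.0f);
        System.out.println("Allocated: " + response.getAllocatedContainers().size());

        rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
        rmClient.stop();
    }
}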
3, NodeManager (NM)
The NodeManager manages a single node in the YARN cluster. It provides services on each node of the cluster, from overseeing the lifetime of containers to monitoring resources and tracking node health. Whereas MRv1 managed the execution of map and reduce tasks through slots, the NodeManager manages abstract containers, which represent the per-node resources available to a particular application.
In general, the NM has the following roles:
1) Managing the resources on a single node
2) Handling commands from the ResourceManager
3) Handling commands from the ApplicationMaster
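As a small sketch of how an ApplicationMaster issues such commands to a NodeManager, the snippet below uses Hadoop's NMClient to start a container, poll its status, and stop it. The container and its launch context are assumed to come from an earlier RM allocation (see the AMRMClient sketch above); the method name runOnNode is illustrative.

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NmSketch {
    // 'container' is assumed to have been allocated by the RM, and 'ctx'
    // to be a prepared launch context (see the Container section below).
    static void runOnNode(Container container, ContainerLaunchContext ctx) throws Exception {
        NMClient nmClient = NMClient.createNMClient();
        nmClient.init(new YarnConfiguration());
        nmClient.start();

        // Command from the AM: start the container on this node.
        nmClient.startContainer(container, ctx);

        // The NM tracks the container's state; the AM can poll it.
        ContainerStatus status =
                nmClient.getContainerStatus(container.getId(), container.getNodeId());
        System.out.println("Container state: " + status.getState());

        // The AM can also ask the NM to stop the container.
        nmClient.stopContainer(container.getId(), container.getNodeId());
        nmClient.stop();
    }
}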
4, Container
A Container is YARN's resource abstraction. It encapsulates multiple dimensions of a node's resources, such as memory, CPU, disk, and network. When the AM requests resources from the RM, the resources the RM returns are represented as Containers. YARN assigns each task a Container, and the task can only use the resources described in that Container.
In general, a Container has the following role:
1) Abstracting the task's run environment: it encapsulates multi-dimensional resources such as CPU and memory, together with environment variables, the start command, and other information needed to run the task
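To illustrate what a Container encapsulates, the sketch below builds a Resource capability plus a ContainerLaunchContext packing local resources, environment variables, and the start command. The command line, the APP_HOME variable, and the MyTask class are purely illustrative placeholders.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.Resource;

public class ContainerSpecSketch {
    public static void main(String[] args) {
        // The multi-dimensional resource the RM grants: 2048 MB of memory, 1 vcore.
        Resource capability = Resource.newInstance(2048, 1);

        // The launch context carries everything else a task needs to run:
        // files to localize, environment variables, and the start command.
        Map<String, LocalResource> localResources = new HashMap<>(); // e.g. the job JAR
        Map<String, String> environment = new HashMap<>();
        environment.put("APP_HOME", "/opt/app"); // illustrative variable

        ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
                localResources,
                environment,
                Arrays.asList("java -Xmx1g MyTask 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"),
                null,   // service data
                null,   // security tokens
                null);  // application ACLs

        System.out.println("capability=" + capability + ", commands=" + ctx.getCommands());
    }
}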
To use a YARN cluster, a client first submits a request containing an application. The ResourceManager negotiates the necessary resources for a container and launches an ApplicationMaster to represent the submitted application. Using a resource-request protocol, the ApplicationMaster negotiates resource containers on individual nodes for the application. While the application executes, the ApplicationMaster monitors its containers until they complete. When the application finishes, the ApplicationMaster deregisters with the ResourceManager and releases its containers, and the execution cycle is complete.
From the explanation above it should be clear that the old Hadoop architecture was highly constrained by the JobTracker, which was responsible for resource management and job scheduling across the entire cluster. The new YARN architecture breaks this model: a new ResourceManager manages resource usage across applications, and an ApplicationMaster manages each job's execution. This change removes a bottleneck and makes it possible to scale a Hadoop cluster to far larger configurations than before. In addition, unlike traditional MapReduce, YARN allows standard communication patterns such as MPI (Message Passing Interface) to be used and supports a variety of programming models, including graph processing, iterative processing, machine learning, and general cluster computing.
4. YARN principles
A YARN job run mainly consists of the following steps.
1. Job Submission
The client calls the Job.waitForCompletion method to submit a MapReduce job to the cluster (step 1). The ResourceManager assigns the job a new job ID (application ID) (step 2). The job client checks the job's output specification, computes the input splits, and copies the job's resources (including the JAR file, configuration files, and split information) to HDFS (step 3). Finally, the job is submitted by calling the ResourceManager's submitApplication() (step 4).
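As a concrete example of the client side of this step, here is a minimal word-count driver of the kind that triggers the submission described above. The class names and the input/output paths taken from the command line are illustrative; any standard Hadoop MapReduce job follows the same pattern.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(2); // sets mapreduce.job.reduces, used during job initialization
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Submits the application to the ResourceManager and polls until it finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}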
2. Job initialization
When the ResourceManager receives the submitApplication() request, it hands the request to the scheduler. The scheduler allocates a container, and the ResourceManager starts the ApplicationMaster process in that container, where it is managed by the NodeManager (step 5).
The ApplicationMaster for a MapReduce job is a Java application whose main class is MRAppMaster. It initializes the job by creating a number of bookkeeping objects to track its progress, since it will receive progress and completion reports from the tasks (step 6). It then retrieves from the distributed file system the input splits computed by the client (step 7), creates one map task per input split, and creates reduce task objects according to mapreduce.job.reduces.
3. Task Assignment
If the job is small, the ApplicationMaster chooses to run the tasks in its own JVM (see the configuration sketch after this step).
If the job is not small, the ApplicationMaster requests containers from the ResourceManager for all the map and reduce tasks (step 8). These requests are piggybacked on heartbeats and include the data locality of each map task, such as the hostnames and racks of the nodes storing its input split. The scheduler uses this information when making scheduling decisions, ideally assigning a task to a node that stores its data, or otherwise to a node in the same rack as the nodes storing the input split.
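For reference, the sketch below shows the configuration properties that govern whether a job is considered small enough to run entirely inside the ApplicationMaster's JVM (an "uber" job). The numeric limits shown are illustrative; the shipped defaults depend on the Hadoop version.

import org.apache.hadoop.conf.Configuration;

public class UberConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Allow small jobs to run entirely inside the ApplicationMaster's JVM.
        conf.setBoolean("mapreduce.job.ubertask.enable", true);
        // A job only qualifies if it stays within these limits (illustrative values).
        conf.setInt("mapreduce.job.ubertask.maxmaps", 9);
        conf.setInt("mapreduce.job.ubertask.maxreduces", 1);
        conf.setLong("mapreduce.job.ubertask.maxbytes", 128L * 1024 * 1024);
        // Pass 'conf' to Job.getInstance(conf, ...) before submitting the job.
    }
}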
4. Task Run
When the ResourceManager's scheduler assigns a container to a task, the ApplicationMaster starts the container by contacting the NodeManager (step 9). The task is executed by a Java application whose main class is YarnChild. Before the task can run, its required resources are localized, such as the job configuration, the JAR file, and any files in the distributed cache (step 10). Finally, the map or reduce task is run (step 11).
YarnChild runs in a dedicated JVM; YARN does not support JVM reuse.
5. Progress and status updates
Tasks in YARN report their progress and status (including counters) back to the ApplicationMaster, and the client polls the ApplicationMaster for progress updates (at an interval set by mapreduce.client.progressmonitor.pollinterval, one second by default) and displays them to the user.
6. Job completion
In addition to polling the ApplicationMaster for progress, the client checks every 5 seconds whether the job has completed by calling waitForCompletion(). This interval can be set via mapreduce.client.completion.pollinterval. When the job completes, the ApplicationMaster and the task containers clean up their working state, and the OutputCommitter's job cleanup method is called. Job information is then archived by the job history server so users can query it later.
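Both polling intervals are ordinary client-side configuration properties. Here is a minimal sketch of how they could be set before submitting the job; the values are illustrative and given in milliseconds.

import org.apache.hadoop.conf.Configuration;

public class ClientPollingSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // How often the client asks the ApplicationMaster for progress updates.
        conf.setLong("mapreduce.client.progressmonitor.pollinterval", 1000);
        // How often waitForCompletion() checks whether the job has finished.
        conf.setLong("mapreduce.client.completion.pollinterval", 5000);
        // Pass 'conf' to Job.getInstance(conf, ...) before calling waitForCompletion().
    }
}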