Some key points I've summed up about YARN


Previously, in Hadoop 1.0, the JobTracker performed two main functions: resource management and job control. When the cluster grows large, the JobTracker has the following deficiencies:
1) The JobTracker is a single point of failure.
2) The JobTracker comes under heavy access pressure, which limits the scalability of the system.
3) Computing frameworks other than MapReduce, such as Storm, Spark, and Flink, are not supported.

Therefore, in the design of YARN, resource management and job control are separated. The JobTracker is replaced by two components: the ResourceManager and the ApplicationMaster.

The ResourceManager is a global resource manager: it schedules resources, starts the ApplicationMaster that each job belongs to, and monitors the ApplicationMaster's liveness. Note: the RM is only responsible for monitoring the AM and restarting it when the AM fails; the RM is not responsible for fault tolerance of the tasks inside the AM — that is the AM's own job. (The monitoring is done through the ApplicationsManager module in the RM.)
An ApplicationMaster is started for every job (not for every task). The ApplicationMaster can run on a machine other than the ResourceManager; each application corresponds to exactly one ApplicationMaster.
The NodeManager is the ResourceManager's agent on each node; it is responsible for maintaining container state and keeping a heartbeat with the RM.
In addition, YARN abstracts resources with containers. A container encapsulates a certain amount of resources on a node (currently YARN supports only CPU and memory). When the AM requests resources from the RM, the RM represents the resources returned to the AM as containers. YARN assigns one or more containers to each task, and a task can use only the resources described in its containers. (Note: the AM itself also runs in a container.) This design allows a variety of computing frameworks to run on YARN, such as MapReduce, Storm, Spark, and Flink.

About containers:
A container is the resource abstraction in YARN. It encapsulates multidimensional resources on a node, such as memory, CPU, disk, and network. When the AM requests resources from the RM, the resources the RM returns to the AM are represented as containers. YARN assigns a container to each task, and the task can use only the resources described in that container.
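The container idea above can be sketched in a few lines. This is an illustrative model, not Hadoop's actual API: the class and method names (`Resource`, `NodeManager.allocate`, etc.) are assumptions made for the example. A container is granted only if the node still has enough free capacity, and it bundles exactly the resources a task may use.

```python
from dataclasses import dataclass

# Illustrative sketch of YARN's container abstraction (names are
# hypothetical, not Hadoop's real API): a container bundles the
# resources (memory, vcores) a task is allowed to use on one node.

@dataclass(frozen=True)
class Resource:
    memory_mb: int
    vcores: int

@dataclass
class Container:
    container_id: str
    node: str
    resource: Resource

class NodeManager:
    """Tracks free capacity on one node and hands out containers."""
    def __init__(self, node: str, capacity: Resource):
        self.node = node
        self.free = Resource(capacity.memory_mb, capacity.vcores)
        self._next_id = 0

    def allocate(self, request: Resource):
        # A container is granted only if the node has enough headroom left.
        if request.memory_mb > self.free.memory_mb or request.vcores > self.free.vcores:
            return None
        self.free = Resource(self.free.memory_mb - request.memory_mb,
                             self.free.vcores - request.vcores)
        self._next_id += 1
        return Container(f"{self.node}_container_{self._next_id}", self.node, request)

nm = NodeManager("node1", Resource(memory_mb=8192, vcores=4))
c1 = nm.allocate(Resource(4096, 2))   # granted: fits within free capacity
c2 = nm.allocate(Resource(8192, 4))   # denied: exceeds remaining capacity
```

The point of the sketch is the invariant: a task never sees the node itself, only the container it was granted, so the scheduler stays in control of how much of each node any one framework can consume.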

To use a YARN cluster, a client first submits a request to the cluster containing the application to run.

The advantages of the YARN design
1) Separation of resource management and job control, reducing the pressure that used to fall on the JobTracker.
The YARN design greatly reduces the resource consumption of the JobTracker (now the ResourceManager), and makes it safer and more elegant to distribute the programs that monitor the status of each job's subtasks (tasks).
In the old framework, a big burden on the JobTracker was monitoring the health of each job's tasks; now this work is delegated to the ApplicationMaster. The ResourceManager has a module called the ApplicationsManager (ASM), which is responsible for monitoring the health of the ApplicationMasters.
2) Ability to support different computing frameworks.

Working principle
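The figure that originally illustrated this section did not survive. As a rough stand-in, the flow described in this article (a client submits an application to the ResourceManager, the RM launches an ApplicationMaster in a container, and the AM then requests containers for its tasks) can be simulated like this. All names here are illustrative, not Hadoop's real API:

```python
# Illustrative simulation of the YARN application flow described in this
# article (not Hadoop's real API): the client submits an application to the
# ResourceManager, the RM launches an ApplicationMaster in a container, and
# the AM then asks the RM for containers to run the application's tasks.

events = []

class ResourceManager:
    def __init__(self):
        self._next_app = 0

    def submit_application(self, name):
        self._next_app += 1
        app_id = f"app_{self._next_app}"
        events.append(f"RM: accepted {name} as {app_id}")
        # Step 1: the RM starts the AM itself inside a container.
        am = ApplicationMaster(app_id, self)
        events.append(f"RM: launched ApplicationMaster for {app_id} in a container")
        return am

    def allocate(self, app_id, n):
        # Step 2: the RM hands the AM containers for its tasks.
        return [f"{app_id}_container_{i}" for i in range(1, n + 1)]

class ApplicationMaster:
    def __init__(self, app_id, rm):
        self.app_id = app_id
        self.rm = rm

    def run_tasks(self, n):
        # Step 3: the AM, not the RM, tracks and runs its own tasks.
        containers = self.rm.allocate(self.app_id, n)
        for c in containers:
            events.append(f"AM: running task in {c}")
        return containers

rm = ResourceManager()
am = rm.submit_application("wordcount")
ran = am.run_tasks(3)
```

Note how the division of labor matches the article: the RM only admits applications, launches AMs, and hands out containers, while each AM manages its own tasks.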



MapReduce on YARN



The shortcomings and prospects of YARN
YARN is a two-level scheduler (two-level scheduler), which solves the deficiencies of the monolithic (central) scheduler, whose typical representative is the JobTracker. The two-level scheduling architecture appears to add flexibility and concurrency to scheduling, but in practice its conservative resource visibility and its locking algorithm (pessimistic concurrency) also limit both.

First, conservative resource visibility means that each framework cannot perceive the resource usage of the entire cluster; idle resources cannot be offered to queued processes, so resources are wasted. Second, the locking algorithm reduces concurrency: the scheduler offers resources to one framework at a time, and only after that framework returns them does the scheduler offer them to other frameworks. During the first offer, the resources are effectively locked, which reduces concurrency.

In summary, the disadvantages of YARN and other two-level schedulers (for example, Mesos) are:
1) Each application cannot perceive the overall resource usage of the cluster and can only wait for the upper level to push scheduling information.
2) Resource allocation uses a polling, resource-offer mechanism (as in Mesos) that holds a pessimistic lock during allocation, so the granularity of concurrency is small.
3) There is no effective mechanism for resource competition or preemption.
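The pessimistic-lock problem in point 2) can be made concrete with a toy model. This is not Mesos or YARN code; the names are invented for illustration. The scheduler offers its whole free pool to one framework at a time, and those resources stay locked until that framework answers, so other frameworks cannot be served concurrently:

```python
# Toy model (not Mesos or YARN code) of the pessimistic resource-offer
# pattern criticized above: the scheduler offers its free resources to one
# framework at a time, and those resources stay locked until the framework
# answers, so other frameworks cannot be served concurrently.

class OfferScheduler:
    def __init__(self, free_cpus):
        self.free_cpus = free_cpus
        self.offer_out = False  # pessimistic lock: one outstanding offer at a time

    def make_offer(self):
        if self.offer_out or self.free_cpus == 0:
            return None  # other frameworks must wait their turn
        self.offer_out = True
        return self.free_cpus  # the whole free pool is offered (and locked)

    def answer_offer(self, cpus_taken):
        assert self.offer_out
        self.free_cpus -= cpus_taken
        self.offer_out = False  # only now can the next framework get an offer

sched = OfferScheduler(free_cpus=8)
offer_a = sched.make_offer()   # framework A sees all 8 CPUs
offer_b = sched.make_offer()   # framework B gets nothing while A decides
sched.answer_offer(2)          # A takes 2 CPUs, releasing the lock
offer_b = sched.make_offer()   # only now is B offered the remaining 6
```

While framework A holds the offer, the remaining 6 CPUs it will eventually decline are invisible to everyone else; that serialization is exactly the loss of concurrency the article describes.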
To address the shortcomings of two-level schedulers — in particular, that applications cannot perceive overall cluster resource usage, and that pessimistic lock control limits concurrency — the shared-state scheduler has attracted more and more attention. The most iconic example is Google's Omega. The shared-state scheduler improves on the two-level scheduler in two ways:
1) It simplifies the global resource manager of the two-level scheduler into a piece of shared data called cell state, which records resource usage within the cluster and achieves the same effect as a global resource manager.
2) All tasks use optimistic concurrency control when accessing the shared data.
Shared-state schedulers also have shortcomings. For example, when the same resource is accessed by different tasks at the same time, conflicts easily arise; the more tasks access it, the more conflicts occur, and the higher the conflict rate, the faster the scheduler's performance degrades, which hurts the scheduler's throughput.
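The optimistic concurrency control mentioned above can be sketched as a compare-and-swap on a versioned cell state. This is a toy model inspired by the Omega description in this article, not real Omega code, and all names are invented: each scheduler snapshots the shared state, plans a claim, and commits only if nobody changed the state in between; on conflict it re-reads and retries.

```python
# Toy sketch (inspired by the Omega description above, not real Omega code)
# of optimistic concurrency on a shared cell state: each scheduler copies
# the state, plans a claim, then commits only if the state version is
# unchanged; on conflict it re-reads and retries.

class CellState:
    def __init__(self, free_cpus):
        self.free_cpus = free_cpus
        self.version = 0  # bumped on every successful commit

    def snapshot(self):
        return (self.version, self.free_cpus)

    def try_commit(self, seen_version, cpus_wanted):
        # Optimistic check: the commit succeeds only if nobody changed the
        # state since our snapshot and enough CPUs remain.
        if self.version != seen_version or cpus_wanted > self.free_cpus:
            return False
        self.free_cpus -= cpus_wanted
        self.version += 1
        return True

def schedule(cell, cpus_wanted, max_retries=5):
    for _ in range(max_retries):
        version, free = cell.snapshot()
        if cpus_wanted > free:
            return False  # not enough resources in the whole cell
        if cell.try_commit(version, cpus_wanted):
            return True
        # Conflict: another scheduler committed first; re-read and retry.
    return False

cell = CellState(free_cpus=8)
ok_a = schedule(cell, 3)   # succeeds
ok_b = schedule(cell, 4)   # succeeds on a fresh snapshot
ok_c = schedule(cell, 4)   # fails: only 1 CPU left in the cell
```

Because there is no lock, many schedulers can plan against the same cell state at once, which is the concurrency gain; but as the retry loop shows, the more schedulers commit to the same resources, the more retries occur — exactly the conflict-driven degradation described above.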
