The Principles and Workflow of Hadoop YARN

I previously wrote about the MapReduce principle and workflow, which already touched on a small amount of YARN content, because YARN evolved from MRv1 and the two are closely linked. As a novice still sorting things out, these notes may be somewhat confusing or inaccurate in places; please bear with me. The structure is as follows: first a brief introduction to resource management in MRv1, then an overview of YARN.

1. MRv1

In MRv1, resource management and computation are both handled by the MapReduce framework alone.

The execution of a job includes the map, sort/shuffle/merge, and reduce phases, while the MapReduce framework itself is also responsible for managing cluster resources, scheduling thousands of jobs, and other duties.

The framework includes two main components: the JobTracker (JT) and the TaskTracker (TT), which manages the tasks on its own node. Each TT communicates with the JT and is controlled by it. The JT is responsible for resource management of the worker nodes, tracking resource usage and availability, managing the task lifecycle, scheduling tasks, tracking their progress, and providing fault tolerance for tasks. The TT starts and cleans up tasks according to the JT's commands and reports task status back to the JT.
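
As a point of comparison, here is a minimal sketch of submitting a job through the old MRv1 API: the client only builds a JobConf and hands everything to the JobTracker, which then schedules the map/reduce tasks onto TaskTracker slots. The class name and paths are placeholders for illustration, not details from the original article.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class Mrv1SubmitSketch {
        public static void main(String[] args) throws Exception {
            // In MRv1 the client just describes the job; scheduling, resource
            // management, and progress tracking are all done by the JobTracker.
            JobConf conf = new JobConf(Mrv1SubmitSketch.class);
            conf.setJobName("mrv1-demo");   // identity mapper/reducer by default
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            // Blocks until the JobTracker reports the job as finished.
            JobClient.runJob(conf);
        }
    }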

However, MRv1 does not support non-MapReduce applications, its scalability is limited, and its resource utilization is low (map slots and reduce slots are not interchangeable); these and other issues led to the emergence of YARN.

2. YARN

YARN's main idea is to split the two major functions of the JobTracker, resource management and job scheduling/monitoring, into two separate daemons: the ResourceManager (RM) and a per-application ApplicationMaster (AM). The RM, together with the NodeManager (NM) on each node, forms a new, general-purpose system that manages applications in a distributed manner. The RM is the ultimate authority that arbitrates resources among all applications, while the per-application AM negotiates resources with the RM and works with the NMs to execute and monitor the application's tasks.
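
As a rough illustration of this division of labor, here is a minimal client-side sketch using the YarnClient API: the client asks the RM for a new application and submits an ApplicationMaster for it. The AM class name, command line, queue, and resource sizes are assumptions chosen for illustration, not details from the article.

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnSubmitSketch {
        public static void main(String[] args) throws Exception {
            YarnConfiguration conf = new YarnConfiguration();
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();

            // Ask the RM for a new application id.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
            appContext.setApplicationName("yarn-demo");

            // Describe how to launch the ApplicationMaster container.
            // The command is a placeholder for a real AM main class.
            ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
                    Collections.emptyMap(),   // local resources
                    Collections.emptyMap(),   // environment
                    Collections.singletonList("$JAVA_HOME/bin/java com.example.MyAppMaster"),
                    null, null, null);
            appContext.setAMContainerSpec(amContainer);

            // Resources requested for the AM container itself.
            appContext.setResource(Resource.newInstance(1024, 1)); // 1 GB, 1 vcore
            appContext.setPriority(Priority.newInstance(0));
            appContext.setQueue("default");

            // Hand the application to the RM; its scheduler picks a node and
            // that node's NodeManager starts the AM container.
            ApplicationId appId = yarnClient.submitApplication(appContext);
            System.out.println("Submitted application " + appId);
        }
    }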

The RM contains a pluggable component, the Scheduler, which is responsible for allocating cluster resources to the running applications. It is a pure scheduler: it does no monitoring or status tracking, and it offers no guarantees about restarting failed tasks. Allocation is expressed through the abstract notion of a resource Container. The NodeManager is the per-node agent that launches the containers belonging to applications, monitors their resource usage (CPU, memory, and so on), and reports it to the RM.
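
To make the container abstraction concrete, the small sketch below (an illustrative assumption, not from the article) shows how a resource request is expressed: a Resource capability plus a Priority, wrapped in an AMRMClient.ContainerRequest that the Scheduler matches purely against available capacity.

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    public class ContainerRequestSketch {
        // A container is an abstract bundle of resources on some node.
        // This request asks for 2 GB of memory and 1 virtual core, with no
        // node or rack constraints, at priority 0.
        public static ContainerRequest exampleRequest() {
            Resource capability = Resource.newInstance(2048, 1);
            Priority priority = Priority.newInstance(0);
            return new ContainerRequest(capability, null, null, priority);
        }
    }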

Each application's ApplicationMaster is responsible for negotiating an appropriate set of containers with the Scheduler, tracking the application's status, and monitoring its progress. In essence, an application requests specific resources through its ApplicationMaster: the AM negotiates with the RM, the Scheduler grants containers matching the resource requests, the AM takes each granted container and hands it to the NodeManager, and the NM uses the allocated resources to start the container's task process.
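
Putting the pieces together, here is a minimal sketch of that negotiation loop as it might look inside an ApplicationMaster: register with the RM, ask for a container, heartbeat via allocate() until the Scheduler grants one, then hand it to the NodeManager to launch. The launch command is a placeholder and error handling is omitted.

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.client.api.NMClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AppMasterSketch {
        public static void main(String[] args) throws Exception {
            YarnConfiguration conf = new YarnConfiguration();

            // Client used by the AM to talk to the RM's Scheduler.
            AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
            rmClient.init(conf);
            rmClient.start();

            // Client used by the AM to ask NodeManagers to launch containers.
            NMClient nmClient = NMClient.createNMClient();
            nmClient.init(conf);
            nmClient.start();

            // 1. Register this AM with the RM.
            rmClient.registerApplicationMaster("", 0, "");

            // 2. Tell the Scheduler what we need (same kind of request as above).
            rmClient.addContainerRequest(new ContainerRequest(
                    Resource.newInstance(2048, 1), null, null, Priority.newInstance(0)));

            // 3. Heartbeat until a container is granted, then hand it to the
            //    NodeManager together with a launch context.
            boolean launched = false;
            while (!launched) {
                AllocateResponse response = rmClient.allocate(0.1f);
                for (Container container : response.getAllocatedContainers()) {
                    ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
                            Collections.emptyMap(), Collections.emptyMap(),
                            Collections.singletonList("sleep 30"),   // placeholder task
                            null, null, null);
                    nmClient.startContainer(container, ctx);
                    launched = true;
                }
                Thread.sleep(1000);
            }

            // 4. Unregister once the work is done.
            rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
        }
    }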

That's all I will record for now.
