Development History and Detailed Analysis of Hadoop YARN


Apache Hadoop with MapReduce is the backbone of distributed data processing. With a cluster architecture that scales horizontally and a fine-grained processing framework originally developed at Google, Hadoop is experiencing explosive growth in new fields of big data processing. Hadoop has also developed a diverse application ecosystem, including Apache Pig (a powerful scripting language) and Apache Hive (a data warehouse solution with a SQL-like interface).

Unfortunately, this ecosystem is built on a single programming model that cannot solve every problem in big data. MapReduce provides a specific programming model; although it has been made easier to use through tools such as Pig and Hive, it is not a panacea. This article introduces MapReduce 2.0 (MRv2), also known as Yet Another Resource Negotiator (YARN), beginning with a quick review of the Hadoop architecture before YARN.

Introduction to Hadoop and MRv1

A Hadoop cluster can scale from a single node (where all Hadoop entities run on the same node) to thousands of nodes (where functions are spread across nodes to increase parallelism). Figure 1 shows the high-level components of a Hadoop cluster.

Figure 1. A simple illustration of the Hadoop cluster architecture

A Hadoop cluster can be divided into two abstract entities: the MapReduce engine and the distributed file system. The MapReduce engine executes map and reduce tasks across the cluster and reports the results, while the distributed file system provides a storage scheme that replicates data across nodes for processing. The Hadoop Distributed File System (HDFS) is designed for large files, which it stores as large blocks (traditionally 64 MB each).
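As a small illustration of the storage layer, the sketch below uses the standard org.apache.hadoop.fs.FileSystem API to write a file to HDFS and print the block size and replication factor applied to it. It assumes a reachable cluster configured through the usual core-site.xml/hdfs-site.xml files; the file path is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/block-demo.txt");  // hypothetical path
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeBytes("hello hdfs\n");
        }

        // Each HDFS file is stored as large blocks replicated across DataNodes.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("block size:  " + status.getBlockSize() + " bytes");
        System.out.println("replication: " + status.getReplication());
    }
}
```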

When a client sends a request to a Hadoop cluster, the request is managed by the JobTracker. The JobTracker works with the NameNode to distribute work as close as possible to the data it operates on. The NameNode is the master of the file system, providing metadata services for data distribution and replication. The JobTracker schedules map and reduce tasks into available slots on one or more TaskTrackers. The TaskTracker, together with the DataNode (the distributed file system), executes map and reduce tasks on data from the DataNode. When the map and reduce tasks are complete, the TaskTracker notifies the JobTracker, which determines when all tasks are complete and finally notifies the client that the job is done.
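To ground the flow just described, here is the canonical WordCount job written against the Hadoop MapReduce Java API. Under MRv1, the JobTracker would schedule its map and reduce tasks into TaskTracker slots exactly as above; input and output paths are supplied as command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map task: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce task: sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```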

As shown in Figure 1, MRv1 implements a relatively simple cluster manager for MapReduce processing. MRv1 provides a hierarchical cluster management scheme in which big data jobs filter into the cluster as individual map and reduce tasks and are later aggregated back into a job to report to the user. But this simplicity comes with some problems that are, in fact, not very hidden.

Defects of MRv1

The first version of MapReduce has both strengths and weaknesses. MRv1 is the standard big data processing system in use today. However, the architecture falls short, mainly on large clusters: when a cluster grows beyond about 4,000 nodes (each of which may be multi-core), it can behave unpredictably. One of the biggest problems is cascading failure, where the attempt to replicate data and reload active nodes after a failure can seriously degrade the entire cluster through network flooding.

But the biggest problem with MRv1 is multi-tenancy. As clusters grow in size, it becomes desirable to employ them for a variety of models. MRv1 dedicates its nodes to Hadoop, so they cannot be repurposed for other applications and workloads. This capability matters even more as big data and Hadoop become an important usage model in cloud deployments, because multi-tenancy would allow Hadoop to run on physical servers without virtualization and its added management, compute, and input/output overhead.

Now let's look at the new architecture of YARN to see how it supports MRv2 and other applications that use different processing models.

Introduction to YARN (MRv2)

To enable sharing, scalability, and reliability of a Hadoop cluster, the designers adopted a layered cluster framework. Specifically, the MapReduce-specific functions have been replaced by a new set of daemons that are open to new processing models.

Recall that the MRv1 JobTracker and TaskTracker approach was an important defect because of its limited scalability and the network overhead it imposed. These daemons were also unique to the MapReduce processing model. To remove these restrictions, the JobTracker and TaskTracker have been removed from YARN and replaced by a set of new daemons that are agnostic to the application.

Figure 2. The new YARN architecture

The essence of the YARN hierarchy is the ResourceManager. This entity governs the entire cluster and manages the assignment of applications to the underlying compute resources. The ResourceManager orchestrates the division of resources (compute, memory, bandwidth, and so on) among the underlying NodeManagers (YARN's per-node agents). The ResourceManager also works with ApplicationMasters to allocate resources, and with the NodeManagers to start and monitor their underlying applications. In this context, the ApplicationMaster has taken over some of the role of the former TaskTracker, and the ResourceManager has taken over the role of the JobTracker.

An ApplicationMaster manages each instance of an application running in YARN. The ApplicationMaster is responsible for negotiating resources from the ResourceManager and, through the NodeManagers, monitoring container execution and resource consumption (allocations of resources such as CPU and memory). Note that although today's resources are fairly traditional (CPU cores, memory), the future may bring new resource types based on the task at hand (such as graphics processing units or other specialized processing devices). From the YARN perspective, an ApplicationMaster is user code, which poses potential security problems; YARN assumes that ApplicationMasters may be buggy or even malicious, and therefore treats them as unprivileged code.

The NodeManager manages each node in a YARN cluster. It provides per-node services within the cluster, from supervising the lifetime management of a container to monitoring resources and tracking node health. Whereas MRv1 managed the execution of map and reduce tasks through slots, the NodeManager manages abstract containers, which represent the per-node resources available to a particular application. YARN continues to use the HDFS layer, with its master NameNode providing metadata services and its DataNodes providing replicated storage services across the cluster.

To use a YARN cluster, you first need a request from a client that contains an application. The ResourceManager negotiates the necessary resources for a container and launches an ApplicationMaster to represent the submitted application. Using a resource-request protocol, the ApplicationMaster negotiates resource containers for the application on each node. As the application executes, the ApplicationMaster monitors the containers until completion. When the application is complete, the ApplicationMaster unregisters its container with the ResourceManager, and the execution cycle is complete.
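A minimal client-side sketch of this lifecycle, using the org.apache.hadoop.yarn.client.api.YarnClient API, appears below. The ApplicationMaster class name, memory size, and vcore count are illustrative assumptions, not values prescribed by YARN.

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class SubmitApp {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // Ask the ResourceManager for a new application id.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
        appContext.setApplicationName("demo-app");

        // Describe the container that will run the ApplicationMaster.
        ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
        amContainer.setCommands(Collections.singletonList(
                "$JAVA_HOME/bin/java com.example.MyApplicationMaster"  // hypothetical AM class
                + " 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout"
                + " 2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
        appContext.setAMContainerSpec(amContainer);
        appContext.setResource(Resource.newInstance(512, 1));  // 512 MB, 1 vcore (assumed)

        // Submit; the ResourceManager allocates a container and launches the AM in it.
        ApplicationId appId = yarnClient.submitApplication(appContext);
        System.out.println("Submitted application " + appId);

        // Poll until the application reaches a terminal state.
        YarnApplicationState state;
        do {
            Thread.sleep(1000);
            state = yarnClient.getApplicationReport(appId).getYarnApplicationState();
        } while (state != YarnApplicationState.FINISHED
                && state != YarnApplicationState.KILLED
                && state != YarnApplicationState.FAILED);
        System.out.println("Final state: " + state);
    }
}
```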

From these discussions, it should be clear that the old Hadoop architecture was highly constrained by the JobTracker, which was responsible for resource management and job scheduling across the entire cluster. The new YARN architecture breaks this model, letting a new ResourceManager manage resource usage across applications while ApplicationMasters take over responsibility for managing job execution. This change removes a bottleneck and also improves the ability to scale a Hadoop cluster to much larger configurations than before. In addition, beyond traditional MapReduce, YARN permits standard communication patterns such as the Message Passing Interface (MPI) and the execution of different programming models, including graph processing, iterative processing, machine learning, and general cluster computing.

What you need to know

With the emergence of YARN, you are no longer constrained by the simpler MapReduce development pattern but can instead create more complex distributed applications. In fact, the MapReduce model is simply one of the applications that can run on the YARN architecture, which exposes more of the underlying framework for custom development. This capability is powerful because the usage model of YARN is virtually unlimited and no longer requires isolation from other, more complex distributed application frameworks on a cluster, as MRv1 did. It could even be argued that as YARN becomes more robust, it will be able to replace some of these other distributed processing frameworks, eliminating the resource overhead they dedicate to themselves and simplifying the overall system.

To demonstrate how YARN can be more efficient than MRv1, consider the parallel problem of brute-forcing the old LAN Manager hash, a method used by earlier versions of Windows for password hashing. In this scenario, the MapReduce approach does not make much sense because the map/reduce phases involve too much overhead. Instead, a more reasonable approach is to abstract the distribution of the job so that each container owns a portion of the password search space, enumerates it, and notifies you if the correct password is found. The key point here is that the password is determined dynamically through a function (which is indeed tricky), without the need to map every possibility into a data structure, making the MapReduce style unnecessary and impractical.
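The following sketch illustrates the partitioning idea in plain Java: it splits a fixed-length, lowercase-alphabet keyspace into contiguous index ranges, one per container. All names here are hypothetical helpers for illustration; a real ApplicationMaster would hand one range to each container, which would then enumerate and hash its candidates.

```java
import java.util.ArrayList;
import java.util.List;

public class KeyspacePartitioner {
    static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz".toCharArray();

    /** Total number of candidate passwords of the given length. */
    static long keyspaceSize(int length) {
        long size = 1;
        for (int i = 0; i < length; i++) size *= ALPHABET.length;
        return size;
    }

    /** Decode the i-th candidate in lexicographic order (base-26 encoding). */
    static String candidate(long index, int length) {
        char[] out = new char[length];
        for (int pos = length - 1; pos >= 0; pos--) {
            out[pos] = ALPHABET[(int) (index % ALPHABET.length)];
            index /= ALPHABET.length;
        }
        return new String(out);
    }

    /** Split [0, keyspaceSize) into one [start, end) range per container. */
    static List<long[]> partition(int length, int containers) {
        long total = keyspaceSize(length);
        long chunk = (total + containers - 1) / containers;
        List<long[]> ranges = new ArrayList<>();
        for (long start = 0; start < total; start += chunk) {
            ranges.add(new long[] {start, Math.min(start + chunk, total)});
        }
        return ranges;
    }

    public static void main(String[] args) {
        // The ApplicationMaster would hand one range to each container;
        // each container enumerates its range and hashes every candidate.
        for (long[] r : partition(4, 8)) {
            System.out.printf("range [%d, %d) starts at \"%s\"%n",
                    r[0], r[1], candidate(r[0], 4));
        }
    }
}
```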

To boil it down, the MRv1 framework demanded problems that fit an associative array, and such problems naturally gravitate toward big data operations. But problems will not always be confined to this paradigm, because you can now abstract them more simply, writing custom clients, ApplicationMasters, and applications of any design you desire.

Developing YARN applications

With the powerful new capabilities that YARN offers and the ability to build custom application frameworks on Hadoop comes new complexity. Building applications for YARN is considerably more involved than building traditional MapReduce applications on pre-YARN Hadoop, because you must develop an ApplicationMaster, which the ResourceManager launches when a client request arrives. The ApplicationMaster has several requirements, including implementing the protocols needed to communicate with the ResourceManager (to request resources) and with the NodeManagers (to allocate containers). For existing MapReduce users, a MapReduce ApplicationMaster minimizes any new work required, making the effort needed to deploy MapReduce jobs similar to that of pre-YARN Hadoop.
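As a sketch of the ApplicationMaster side of these protocols, the code below uses the org.apache.hadoop.yarn.client.api.AMRMClient API to register with the ResourceManager, request a handful of containers, and unregister when done. Container launch (via NMClient) and error handling are omitted, and the container count and resource sizes are illustrative assumptions.

```java
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MinimalAppMaster {
    public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(new YarnConfiguration());
        rmClient.start();

        // Register with the ResourceManager (host/port/tracking URL unused here).
        rmClient.registerApplicationMaster("", 0, "");

        // Ask for containers: 256 MB and 1 vcore each (assumed sizes).
        Priority priority = Priority.newInstance(0);
        Resource capability = Resource.newInstance(256, 1);
        for (int i = 0; i < 4; i++) {
            rmClient.addContainerRequest(
                    new ContainerRequest(capability, null, null, priority));
        }

        // Heartbeat until the ResourceManager has granted all containers.
        int allocated = 0;
        while (allocated < 4) {
            allocated += rmClient.allocate(0.1f).getAllocatedContainers().size();
            Thread.sleep(1000);
        }

        // Work would be launched on the containers here (via NMClient), then:
        rmClient.unregisterApplicationMaster(
                FinalApplicationStatus.SUCCEEDED, "done", "");
        rmClient.stop();
    }
}
```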

In many cases, the lifecycle of a YARN application resembles that of an MRv1 application: the application allocates resources in the cluster, performs its processing, exposes touchpoints for monitoring its progress, and finally releases its resources and performs general cleanup when it completes. A sample implementation of this lifecycle is available in a project called Kitten (see References). Kitten is a set of tools and code that simplifies application development in YARN, letting you focus on your application's logic while initially ignoring the details of negotiating with the various entities in a YARN cluster. If you want to go deeper, however, Kitten provides a set of services that can be used to handle interactions with other cluster entities (such as the ResourceManager). Kitten supplies its own ApplicationMaster, which works but is intended only as an example. Kitten relies heavily on Lua scripts for its configuration.

What's next

Although Hadoop continues to grow in the big data market, it has begun an evolution to address large-scale data workloads yet to be defined. YARN is still under active development and may not be suitable for production environments, but it provides important advantages over traditional MapReduce: it enables the development of new distributed applications beyond MapReduce and allows them to coexist simultaneously on the same cluster. YARN builds on existing elements of today's Hadoop clusters but also improves elements such as the JobTracker in ways that increase scalability and enhance the ability of many different applications to share the cluster. YARN will soon be coming to a Hadoop cluster near you, bringing new capabilities and new complexities with it.
