New-Generation Big Data Processing Engine: Apache Flink


Source: https://www.ibm.com/developerworks/cn/opensource/os-cn-apache-flink/index.html

The development of big data computing engines

With the rapid development of big data in recent years, many popular open source projects have emerged, including Hadoop, Storm, and later Spark, each with its own typical application scenarios. Spark pioneered in-memory computation and bet heavily on it, riding the rapid rise of in-memory computing to grow quickly. Spark's popularity has, to a greater or lesser extent, overshadowed other distributed computing systems; Flink, for one, was quietly developing during this same period.

In some overseas communities, big data computing engines are often divided into four generations, though of course not everyone agrees with this division. Let's take it as a starting point for discussion.

The first generation of computing engines is undoubtedly MapReduce, as carried by Hadoop. Everyone should be familiar with MapReduce: it splits a computation into two stages, Map and Reduce. For upper-layer applications, this means finding ways to decompose an algorithm into these two stages, and sometimes even chaining multiple jobs together in the application layer to implement a complete algorithm, such as an iterative computation.

This shortcoming spawned frameworks that support DAGs, so DAG-enabled frameworks are classed as the second generation of computing engines, for example Tez and the higher-level Oozie. We will not scrutinize the differences between the various DAG implementations here, but for the Tez and Oozie of that time, most workloads were still batch-processing tasks.

Next comes the third generation of computing engines, represented by Spark. The third generation is characterized primarily by DAG support within a job (rather than across jobs) and by an emphasis on real-time computation. With these capabilities, many people also expect a third-generation engine to run batch jobs well.

The advent of the third-generation engines facilitated the rapid development of upper-layer applications, for example better performance for various iterative computations and support for stream computing and SQL. Flink's birth is attributed to the fourth generation, which shows mainly in Flink's support for stream computation and in an even stronger real-time orientation. Of course, Flink also supports batch tasks and DAG-style execution.

Some readers may disagree with the above classification; I do not think that matters much. What matters is to understand the differences between the various frameworks and the scenarios each suits best, and to recognize that no framework can perfectly support every scenario, nor can any framework completely replace another, just as Spark did not completely replace Hadoop, and Flink will not completely replace Spark. This article is devoted to the principles and applications of Flink.

Flink Introduction

Many people probably only heard the word Flink in 2015, but in fact Flink's predecessor started as early as 2008 as a research project at the Technical University of Berlin. It was accepted into the Apache Incubator in 2014 and quickly became one of the top-level projects of the ASF (Apache Software Foundation). The latest version of Flink has been updated to 0.10.0; while many of us marvel at how fast Spark has developed, perhaps we should also applaud the pace of Flink's development.

Flink is a distributed processing engine for streaming data and batch data. It is implemented mainly in Java and currently developed through contributions from the open source community. The main scenario Flink targets is streaming data; batch data is treated as just a special, limiting case of streaming data. In other words, Flink treats all tasks as streams, and this is its biggest characteristic. Flink can support fast local iterations as well as some cyclic iterative tasks, and it manages memory itself. In this respect, if we compare Flink with Spark, Flink does not hand memory entirely over to the application layer, which is why Spark is more prone to OOM (out of memory) errors than Flink. In terms of the framework itself and its application scenarios, Flink is more similar to Storm. Readers who already know Storm or Flume may find it easier to understand Flink's architecture and many of its concepts. Let's first take a look at Flink's architecture diagram.

Figure 1. Flink architecture

As shown in Figure 1, we can see some of Flink's most basic concepts: Client, JobManager, and TaskManager. The Client submits a job to the JobManager, the JobManager distributes tasks to the TaskManagers for execution, and the TaskManagers report task status back to the JobManager via heartbeats. Seeing this, some readers may have the illusion of being back in the first-generation Hadoop era. Indeed, judging from the architecture diagram, the JobManager looks very much like the JobTracker of those days, and the TaskManager is very similar to the TaskTracker. One of the most important differences, however, is that data streams flow between TaskManagers. Second, in first-generation Hadoop there was only the shuffle between Map and Reduce, whereas in Flink there may be many stages, with data transferred between TaskManagers, rather than the fixed Map-to-Reduce pattern of Hadoop.
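Before moving on to scheduling, the "everything is a stream" idea mentioned above can be made concrete with a minimal Java sketch (an illustration written for this text against the public DataStream API, not code taken from Flink itself):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EverythingIsAStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Even a fixed, bounded input is handled as a stream by Flink.
        env.fromElements("flink", "treats", "batch", "as", "a", "bounded", "stream")
           .print();
        env.execute("bounded input as a stream");
    }
}

Although the input here is a fixed, bounded collection, the program is still expressed and executed as a stream.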

A brief introduction to scheduling in Flink

In a Flink cluster, a unit of compute resource is defined as a Task Slot. Each TaskManager has one or more slots, and the JobManager schedules tasks into those slots. The "task" here, however, differs from the one we know from Hadoop: what Flink's JobManager schedules is a pipelined task, not a single stage. For example, in Hadoop, Map and Reduce are two tasks that are scheduled independently and each occupies its own compute resources; in Flink, a Map-Reduce job is one pipelined task that occupies only one compute resource. Similarly, an MRR (Map-Reduce-Reduce) pipeline is also dispatched as a single pipelined task in Flink. A TaskManager can run multiple pipelines, depending on how many slots it has.
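As a hedged illustration of "one slot holds a whole pipeline" (a sketch written for this text, not code from the article, relying on Flink's default slot-sharing behavior): one parallel slice of the entire source -> map -> sink pipeline below fits in a single slot, so at parallelism 2 the job needs two free slots in total, not two per operator.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2); // the cluster must offer at least two task slots for this job
        env.fromElements(1, 2, 3, 4, 5)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * 2; // stand-in for real per-record work
               }
           })
           .print();
        env.execute("pipeline-per-slot sketch");
    }
}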

This is easier to understand in Flink's Standalone deployment mode, because Flink itself then has to do some simple management of compute resources (slots). When Flink is deployed on Yarn, however, Flink does not scale back this resource management; in other words, Flink is doing part of what Yarn should be doing. From a design point of view, I do not think this is entirely reasonable. If Yarn's containers cannot fully isolate CPU resources, then configuring multiple slots per Flink TaskManager may lead to unfair use of resources. If Flink wants to share compute resources well with other computing frameworks in a data center, it should try not to interfere with the allocation and definition of compute resources.

Readers who want to study Flink's scheduling in depth can find the flink-runtime folder in the Flink source directory; the JobManager code lives there.

Flink's ecosystem

For a computing framework to develop over the long run, it must build a complete stack; otherwise it will not get far. Only when the upper layers have concrete applications that can exploit the strengths of the underlying framework will the framework attract more resources and progress faster. So Flink is also working hard to build its own stack.

Flink first supported the Scala and Java APIs, and Python support is being tested. Flink supports graph processing through Gelly and machine learning through FlinkML. Table is an interface for SQL support, offered as API support rather than textual SQL parsing and execution. For the complete stack, see Figure 2.

Figure 2. Flink's Stack

Flink has also implemented many Connector sub-projects to plug into the wider big data ecosystem. The most familiar is, of course, the integration with Hadoop HDFS. Flink has also announced support for Tachyon, S3, and MapRFS. Support for Tachyon and S3, however, is implemented through Hadoop HDFS, which means Hadoop is required in order to use Tachyon or S3, and Hadoop's configuration (core-site.xml) must be changed. Browsing Flink's code directory, we can see more Connector projects, such as Flume and Kafka.
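To give a feel for what these connectors look like from the API side, here is a hedged Java sketch of consuming a Kafka topic (an illustration written for this text; the class names FlinkKafkaConsumer and SimpleStringSchema come from later connector releases and differ in older Flink versions, and the broker address, consumer group, and topic name are made up):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "flink-demo");              // assumed consumer group
        env.addSource(new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props))
           .print();                                              // print each Kafka record as a String
        env.execute("Kafka connector sketch");
    }
}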

Deployment of Flink

Flink has three deployment modes: Local, Standalone Cluster, and Yarn Cluster. In Local mode, the JobManager and TaskManager share one JVM to complete the workload; Local mode is the most convenient way to validate a simple application. In practical applications, Standalone or Yarn Cluster mode is mostly used. Below I mainly introduce these two modes.

Standalone mode

Before building a Flink cluster in Standalone mode, we need to download the Flink installation package; here we download the Flink package built for Hadoop 1.x. After downloading and extracting it, go to Flink's root directory and look at the conf folder, as shown in Figure 3.

Figure 3. Flink's directory structure

We need to specify the Master and the Workers. The Master machine starts the JobManager, and the Workers start the TaskManagers. Therefore, we need to modify the masters and slaves files in the conf directory. When configuring the masters file, we also specify the UI listening port of the JobManager. In general, only one JobManager needs to be configured, while one or more Workers must be configured (one per line). An example follows:

micledeMacBook-Pro:conf micle$ cat masters
localhost:8081
micledeMacBook-Pro:conf micle$ cat slaves
localhost

Locate the file flink-conf.yaml in the conf directory. This file defines the basic properties of the Flink modules, such as the RPC ports and the heap sizes of the JobManager and TaskManager. In cases where HA is not a concern, it is generally only necessary to modify the property taskmanager.numberOfTaskSlots, i.e. the number of slots each TaskManager owns. It is typically set to the number of CPU cores of the machine, in order to balance computing performance across machines; its default value is 1. After the configuration is complete, start the JobManager and TaskManager with the commands shown in Figure 4 (before starting, confirm that the Java environment is ready).

Figure 4. Starting Flink in Standalone mode

After startup we can log in to Flink's GUI page, where we can see the basic properties of the Flink cluster; on the JobManager and TaskManager pages we can see the properties of those two modules. Currently the Flink GUI only provides simple viewing capabilities and cannot modify configuration properties dynamically, which is usually hard to accept in enterprise-grade applications. So an enterprise that really wants to adopt Flink will probably have to strengthen this web functionality.

Figure 5. GUI page of Flink

Yarn Cluster Mode

In an enterprise, in order to maximize the use of cluster resources, it is common to run multiple types of workload in one cluster at the same time, so Flink also supports running on Yarn. First, let's understand the relationship between Yarn and Flink.

Figure 6. The relationship between Flink and Yarn

As can be seen in the figure, Flink's relationship to Yarn is the same as MapReduce's relationship to Yarn: Flink implements its own ApplicationMaster via Yarn's interfaces. When Flink is deployed on Yarn, Yarn uses its own containers to launch the Flink JobManager (which is the ApplicationMaster) and the TaskManagers.

Knowing the relationship between Flink and Yarn, let's briefly go through the deployment steps. A Yarn cluster needs to be deployed first, which I will not cover here. We can list the existing applications on Yarn and check Yarn's status with the following command:

yarn application -list

If the command returns correctly, it means Yarn's ResourceManager is currently up and running. Flink provides different installation packages for different Yarn versions; we can find the corresponding package on Apache Flink's download page. My Yarn version is 2.7.1. Before going into the concrete steps, we need to understand that Flink can run on Yarn in two ways: one is to let Yarn start the JobManager and TaskManagers directly, and the other is to start the Flink modules only when a Flink workload is run. The former is equivalent to keeping the Flink modules in a standby state. Here I mainly introduce the former.

After downloading and decompressing the Flink installation package, we need to set the environment variable HADOOP_CONF_DIR or YARN_CONF_DIR to Yarn's configuration directory, for example by running the following command:

export HADOOP_CONF_DIR=/etc/hadoop/conf

This is needed because Flink implements a Yarn client and therefore requires some of Yarn's configuration and jar packages. After the environment variable is configured, simply running the following script makes Yarn start the Flink JobManager and TaskManagers:

yarn-session.sh -d -s 2 -tm 800 -n 2

The above command asks Yarn for 2 containers to start TaskManagers (-n 2), gives each TaskManager two task slots (-s 2), requests 800 MB of memory from Yarn for each TaskManager (-tm 800), and runs the session in detached mode (-d). After the command succeeds, we can see the Flink record on Yarn's application page, as in Figure 7.

Figure 7. Flink on Yarn

Readers testing in a virtual machine may run into errors. Pay attention to the memory sizes: Flink asks Yarn for multiple containers, but Yarn's configuration may limit the amount of memory a container can request, and the memory managed by Yarn itself may be small. As a result the TaskManagers may fail to start properly, especially when multiple TaskManagers are requested. Therefore, after starting Flink, check Flink's status on the Flink page; you can jump there directly from the ResourceManager page (click Tracking UI). Flink's page at this point is shown in Figure 8.

Figure 8. Flink's page

For troubleshooting a Flink installation, it is often most useful to look at the Yarn-related logs. I will not go into more detail here; readers can find more in the Yarn documentation.

HA in Flink

For an enterprise-grade application, stability comes first and performance second, so an HA mechanism is essential. Readers who already understand Flink's architecture may also be worried about single points in that architecture: as with first-generation Hadoop, it is clear from the architecture that the JobManager is an obvious SPOF (single point of failure). The JobManager shoulders task scheduling and resource allocation, and the consequences of a JobManager failure can be imagined. Flink's approach to JobManager HA is essentially the same as Hadoop's (first and second generation).

First, we need to know that Flink has two deployment modes here, Standalone and Yarn Cluster. For Standalone mode, Flink must rely on Zookeeper to implement JobManager HA (Zookeeper has become an essential module for HA in most open source frameworks). With the help of Zookeeper, a Standalone Flink cluster has several live JobManagers at the same time, only one of which is in the working state while the others stand by. When the working JobManager loses connectivity (for example due to downtime or a crash), Zookeeper elects a new JobManager from the standbys to take over the Flink cluster.

For Yarn Cluster mode, Flink relies on Yarn itself to provide HA for the JobManager. This is in fact purely a Yarn mechanism. In Yarn Cluster mode, both the JobManager and the TaskManagers are started by Yarn in Yarn containers; at this point the JobManager should really be called the Flink ApplicationMaster, which means its failure recovery depends entirely on Yarn's ResourceManager (the same as for the MapReduce ApplicationMaster). Because it relies fully on Yarn, different Yarn versions may show subtle differences; I will not dig into this further.

An introduction to Flink's REST API

Like most other open source frameworks, Flink provides many useful REST APIs. Flink's REST API is not very powerful yet, though: it only supports some monitoring functions. The Flink Dashboard itself also queries the result data of each item through this REST interface. On the basis of Flink's REST API, it is easy to integrate Flink's monitoring functions with other third-party tools, which was the original design intention.

In a Flink deployment, the REST API service is provided by the JobManager, so before calling the REST API, make sure the JobManager is in a normal state. Normally, after sending a REST request to the JobManager, the client receives a result in JSON format. Since the REST API does not provide much functionality at this time, readers who need to enhance it can find the corresponding code in the sub-project flink-runtime-web. The key class WebRuntimeMonitor routes all REST requests; if a new type of request needs to be added, the corresponding handling code goes there. Below I list a few common REST calls.

1. Query the basic information of the Flink cluster: /overview. An example command line and its returned result:

$ curl http://localhost:8081/overview
{"taskmanagers":1,"slots-total":16,"slots-available":16,"jobs-running":0,"jobs-finished":0,"jobs-cancelled":0,"jobs-failed":0}

2. Query the job information of the current Flink cluster: /jobs. An example command line and its returned result:

$ curl http://localhost:8081/jobs
{"jobs-running":[],"jobs-finished":["f91d4dd4fdf99313d849c9c4d29f8977"],"jobs-cancelled":[],"jobs-failed":[]}

3. Query the information of a specified job: /jobs/<jobid>. This query returns much more detailed content; I tested it in a browser, as shown in Figure 9:

Figure 9. REST query result for a specific job

Readers who want to learn about more REST requests can look them up on the Apache Flink pages; for reasons of space they are not listed here.
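Since the stated goal of the REST API is to make integration with third-party monitoring tools easy, a small Java sketch of such a client may help (written for this text using only JDK classes; the host, port, and endpoint match the curl examples above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FlinkOverviewClient {
    public static void main(String[] args) throws Exception {
        // Ask the JobManager's web port (8081 above) for the cluster overview.
        URL url = new URL("http://localhost:8081/overview");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON, e.g. {"taskmanagers":1,"slots-total":16,...}
            }
        }
        conn.disconnect();
    }
}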

Running the Flink Workload

The WordCount example is the HelloWorld of computing frameworks. Here I take WordCount as an example to show how to run a workload in Flink.

In an environment where Flink is installed, go to the Flink directory and find bin/flink, which is the tool used to submit Flink workloads. For WordCount, we can directly use the bundled example jar package, for example by running the following command:

./bin/flink run ./examples/WordCount.jar hdfs://user/root/test hdfs://user/root/out

The above command runs WordCount against HDFS; if there is no HDFS, the local file system works as well, simply replace "hdfs://" with "file://". One deployment relationship worth emphasizing here is that Flink in Standalone mode can also directly access distributed file systems such as HDFS.
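For readers who want to see what such a job looks like in code, here is a hedged Java sketch roughly equivalent to the bundled WordCount example (an illustration using the DataSet API written for this text, not the exact source of the jar shipped with Flink):

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<String> text = env.readTextFile(args[0]);   // input path, e.g. hdfs:// or file://
        text.flatMap(new Tokenizer())
            .groupBy(0)                                     // group by the word field
            .sum(1)                                         // sum the per-word counts
            .writeAsCsv(args[1], "\n", " ");                // output path from the command line
        env.execute("WordCount sketch");
    }

    // Splits each line into (word, 1) pairs.
    public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }
}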

Conclusion

Flink is a project that started later than Spark, but that does not mean Flink's future will be bleak. Flink and Spark have many similarities, and also many obvious differences. This article does not compare the two in detail; that is something I would like to discuss with you in the future, for example how Flink manages memory more efficiently and how it further helps user programs avoid OOM. In Flink's world everything is a stream, and it is more focused on processing streaming applications. Because it started late, and its community is not yet as hot as Spark's, it is not as polished as Spark in some detailed scenarios; for example, its current SQL support is not yet as smooth as Spark's. In enterprise applications Spark has already begun to land, while Flink may still need some time to mature. In subsequent articles I will describe in detail how to develop Flink programs, and more about Flink's internal implementation.

Related Topics
    • Apache Flink
    • Apache Tachyon
    • Apache Hadoop
