Spark High-Level Architecture & Execution Steps

Spark uses a master-slave architecture with a central coordinator and many distributed workers.

The central coordinator is called the driver. The driver communicates with a large number of distributed workers called executors.

The driver runs in its own Java process, and each executor is a separate Java process. The driver, together with all of its executors, makes up a Spark application.

A Spark application is launched on a set of machines using an external service called a cluster manager. Note that Spark ships with a built-in cluster manager called the Standalone Cluster Manager. Spark also works with two open-source cluster managers, Hadoop YARN and Apache Mesos.
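
The choice of cluster manager is expressed through the application's master URL. A minimal sketch of the three options, where the host names and ports are placeholders rather than real endpoints:

    import org.apache.spark.SparkConf

    val standaloneConf = new SparkConf().setMaster("spark://master-host:7077") // Standalone Cluster Manager
    val yarnConf       = new SparkConf().setMaster("yarn")                     // Hadoop YARN
    val mesosConf      = new SparkConf().setMaster("mesos://mesos-host:5050")  // Apache Mesos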

    • Driver

The driver is the process where your program's main() method runs. It is the process that runs the user code that creates a SparkContext, creates RDDs, and performs transformations and actions. When you launch a Spark shell, you create a driver program. Once the driver terminates, the entire application is over.
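
In the Spark shell, for example, the shell process itself is the driver, and a SparkContext named sc is created for you. A minimal sketch of driver-side code (the input path is hypothetical):

    val lines   = sc.textFile("input.txt")   // create an RDD from input (hypothetical path)
    val lengths = lines.map(_.length)        // transformation: recorded lazily by the driver
    println(lengths.reduce(_ + _))           // action: the driver now schedules real work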
While running, the driver has two responsibilities:

    • Converting the user program into tasks

The Spark driver is responsible for converting a user program into physical units of execution called tasks. At a high level, all Spark programs follow the same structure: they create RDDs from some input, derive new RDDs from them using lazy transformations, and perform actions to collect or save data. In doing so, a Spark program implicitly builds a logical directed acyclic graph (DAG) of operations. When the driver runs, it converts this graph into a physical execution plan.

Spark performs several optimizations, such as "pipelining" adjacent map transformations so they merge, and converts the execution graph into a set of stages. Each stage, in turn, consists of a set of tasks. Tasks are bundled up and prepared to be sent to the cluster. The task is the smallest unit of work in Spark; a typical user program launches hundreds or thousands of individual tasks.
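
As a rough sketch of how this looks in practice (the input path is hypothetical), the lineage the driver builds for a small word count can be inspected with RDD.toDebugString; the shuffle introduced by reduceByKey marks a stage boundary, while the preceding transformations are pipelined into a single stage:

    val counts = sc.textFile("input.txt")   // hypothetical input
      .flatMap(_.split(" "))                // pipelined into the same stage as the read
      .map(word => (word, 1))               // still the same stage
      .reduceByKey(_ + _)                   // shuffle boundary: starts a new stage
    println(counts.toDebugString)           // prints the lineage; indentation marks stages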

    • Scheduling tasks on executors

Given a physical execution plan, the driver must coordinate the scheduling of individual tasks on executors. When executors start, they register themselves with the driver, so the driver has a complete view of the application's executors at all times. Each executor represents a process capable of running tasks and storing RDD data.

The Spark driver looks at the current set of executors and tries to schedule each task in an appropriate location based on where the data is placed. When tasks execute, they may have the side effect of caching data; the driver also tracks the location of cached data and uses it to schedule future tasks that access that data.

The driver exposes information about the running Spark application through a web interface, served on port 4040 by default. For example, in local mode, the UI is available at http://localhost:4040.
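
As a small aside, the UI port is configurable through the real spark.ui.port setting; the application name below is a placeholder:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ui-demo")              // placeholder name
      .set("spark.ui.port", "4050")       // serve the driver UI on 4050 instead of the default 4040
    val sc = new SparkContext(conf)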

    • Executors

Spark executors are worker processes responsible for running the individual tasks in a given Spark job. Executors are launched once at the start of a Spark application and typically run for the application's entire lifetime; even if executors fail, the Spark application can continue. Executors serve two roles. First, they run the tasks that make up the application and return results to the driver. Second, through a service called the Block Manager that lives within each executor, they provide in-memory storage for the RDDs that user programs cache. Because the RDDs are cached directly inside the executors, tasks can run alongside the cached data.
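
A sketch of this in practice, using a synthetic dataset: persist() marks the RDD for storage in each executor's Block Manager, and the first action materializes the cache.

    import org.apache.spark.storage.StorageLevel

    val data = sc.parallelize(1 to 1000000)   // synthetic dataset
    data.persist(StorageLevel.MEMORY_ONLY)    // ask executors' Block Managers to keep it in memory
    data.count()                              // first action computes and caches the partitions
    data.count()                              // later actions read from the executor caches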

    • The exact steps when a Spark application runs on a cluster

1. The user submits an application with spark-submit.
2. spark-submit launches the driver program and invokes the user-specified main() method.
3. The driver program contacts the cluster manager to request resources for launching executors.
4. The cluster manager launches executors on behalf of the driver program.
5. The driver process runs through the user application. Based on the RDD transformations and actions in the program, the driver sends work to the executors in the form of tasks.
6. Tasks run in the executor processes to compute and save results.
7. When the driver's main() method exits or calls SparkContext.stop(), it terminates the executors and releases the resources allocated by the cluster manager.
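
The following sketch of a complete application mirrors these steps; the object name and the use of args for input and output paths are assumptions, not fixed conventions:

    import org.apache.spark.{SparkConf, SparkContext}

    object ExampleApp {                      // hypothetical application entry point
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("ExampleApp")
        val sc = new SparkContext(conf)      // steps 3-4: resources requested, executors launch
        val counts = sc.textFile(args(0))    // input path passed on the spark-submit command line
          .flatMap(_.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.saveAsTextFile(args(1))       // steps 5-6: the action ships tasks to the executors
        sc.stop()                            // step 7: executors terminate, resources are released
      }
    }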
