Spark Kernel Revealed - 04 - A Personal Understanding of the Spark Task Scheduling System

Source: Internet
Author: User
Tags: shuffle

The Spark task scheduling system works as follows:

From the diagram we can see that RDD objects generate a DAG, which is then handed to the DAGScheduler. The DAGScheduler is a stage-oriented, high-level scheduler: it splits the DAG into many groups of tasks, where each group of tasks forms a stage, and a new stage is produced whenever a shuffle is encountered (a total of three stages can be seen in the diagram). The DAGScheduler needs to record which RDDs have been materialized to disk and other such actions, and at the same time it must find an optimal schedule for the tasks, taking data locality and similar factors into account. The DAGScheduler also monitors failures caused by lost shuffle outputs; if such a failure occurs, the affected stage may need to be resubmitted:
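To make the splitting rule concrete, here is a toy Python sketch (not Spark's actual code; the lineage representation and function name are hypothetical) that walks a linear RDD lineage and starts a new stage at every shuffle dependency:

```python
# Toy model: split a linear RDD lineage into stages at shuffle boundaries.
# Each RDD carries the type of its dependency on the previous RDD:
# "narrow" stays in the current stage, "shuffle" opens a new one.

def split_into_stages(lineage):
    """lineage: list of (rdd_name, dep_type) pairs."""
    stages = [[]]
    for rdd, dep in lineage:
        if dep == "shuffle":
            stages.append([])  # a shuffle dependency starts a new stage
        stages[-1].append(rdd)
    return stages

lineage = [
    ("textFile",    "narrow"),
    ("map",         "narrow"),
    ("reduceByKey", "shuffle"),  # wide dependency -> stage boundary
    ("mapValues",   "narrow"),
    ("sortByKey",   "shuffle"),  # second shuffle -> third stage
]

print(split_into_stages(lineage))
# [['textFile', 'map'], ['reduceByKey', 'mapValues'], ['sortByKey']]
```

With two shuffles in the lineage, the sketch yields three stages, matching the "new stage at every shuffle" rule described above.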


After the DAGScheduler has divided the DAG into stages, it submits the tasks of each stage as a TaskSet to the low-level, pluggable scheduler, TaskScheduler, for processing:


As can be seen, TaskScheduler is a trait; in the current Spark code base, TaskScheduler has only one implementation class, TaskSchedulerImpl:
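The trait/implementation relationship can be sketched in Python with an abstract base class. The method names below loosely mirror Spark's scheduler interface (start, submitTasks, stop), but this is an illustrative analogue, not the real Scala definition:

```python
# Toy analogue of the TaskScheduler trait and its single implementation.
from abc import ABC, abstractmethod

class TaskScheduler(ABC):
    """Abstract interface: cannot be instantiated directly, like a trait."""

    @abstractmethod
    def start(self): ...

    @abstractmethod
    def submit_tasks(self, task_set): ...

    @abstractmethod
    def stop(self): ...

class TaskSchedulerImpl(TaskScheduler):
    """The one concrete implementation in this toy model."""

    def __init__(self):
        self.submitted = []

    def start(self):
        pass  # would start backend threads in a real scheduler

    def submit_tasks(self, task_set):
        self.submitted.append(task_set)  # record the TaskSet for dispatch

    def stop(self):
        pass

scheduler = TaskSchedulerImpl()
scheduler.submit_tasks("taskset-for-stage-0")
print(scheduler.submitted)  # ['taskset-for-stage-0']
```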


A TaskScheduler serves only one SparkContext instance. The TaskScheduler accepts the groups of tasks sent from the DAGScheduler; the DAGScheduler submits tasks to the TaskScheduler one stage at a time. Upon receiving the tasks, the TaskScheduler is responsible for distributing them to the executors on the cluster's workers to run. If a task fails, the TaskScheduler is responsible for retrying it; and if the TaskScheduler finds that a task still has not finished running, it may launch a speculative copy of the same task on another node, taking the result of whichever copy finishes first.
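The retry behaviour described above can be sketched as a small loop (a toy model, not Spark's code; the function name and the retry limit are hypothetical):

```python
# Toy model of task retry: a failed task is resubmitted up to
# max_failures times before the whole stage is aborted.

def run_with_retries(task, max_failures=4):
    """task: a zero-argument callable that returns a result or raises."""
    last_error = None
    for attempt in range(max_failures):
        try:
            return task()
        except Exception as e:
            last_error = e  # remember the failure and retry
    raise RuntimeError(
        "task failed %d times; aborting stage (%s)" % (max_failures, last_error)
    )

# A flaky task that fails twice, then succeeds on the third attempt:
attempts = {"n": 0}

def flaky_task():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("lost executor")
    return "result"

print(run_with_retries(flaky_task))  # result
```

Speculative execution would additionally race a second copy of a slow task and keep the first result to arrive; the retry loop above only models the failure path.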


The tasks sent by the TaskScheduler are executed by the executors on the workers in a multi-threaded manner, with each thread responsible for one task:
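The thread-per-task execution model can be sketched with a thread pool (an illustrative analogue; real Spark executors run tasks on a Java thread pool, and the names here are hypothetical):

```python
# Toy sketch of an executor running each received task on its own thread.
from concurrent.futures import ThreadPoolExecutor

def launch_tasks(tasks, max_threads=4):
    """tasks: list of zero-argument callables; each runs on a pool thread."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(t) for t in tasks]  # one thread per task
        return [f.result() for f in futures]       # collect task results

results = launch_tasks([lambda i=i: i * i for i in range(4)])
print(results)  # [0, 1, 4, 9]
```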





The storage system is managed by the BlockManager:
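At its core, a block manager maps block ids to stored data. A toy Python analogue (the class, method names, and storage-level flag are all illustrative, not Spark's API) might look like this:

```python
# Toy analogue of a block manager: a per-node store mapping block ids
# to data, with a simple storage-level tag (e.g. "memory" or "disk").

class ToyBlockManager:
    def __init__(self):
        self._blocks = {}

    def put(self, block_id, data, storage_level="memory"):
        self._blocks[block_id] = (data, storage_level)

    def get(self, block_id):
        entry = self._blocks.get(block_id)
        return entry[0] if entry is not None else None

bm = ToyBlockManager()
bm.put("rdd_0_1", [1, 2, 3])       # e.g. a cached partition of an RDD
print(bm.get("rdd_0_1"))           # [1, 2, 3]
print(bm.get("missing_block"))     # None
```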


Look at the source code of TaskSet:


From the first parameter of the TaskSet source, tasks, you can see that it is an array of Task objects: a TaskSet contains one group of tasks.
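A minimal Python analogue of such a TaskSet container (the field names mirror the description above, not Spark's exact Scala definition) could be:

```python
# Toy analogue of a TaskSet: the first field is the array (here a list)
# of tasks for one stage; the other fields are illustrative.

class TaskSet:
    def __init__(self, tasks, stage_id, priority=0):
        self.tasks = tasks          # the group of tasks for one stage
        self.stage_id = stage_id    # which stage this TaskSet belongs to
        self.priority = priority    # used by the scheduler to order TaskSets

    def __len__(self):
        return len(self.tasks)

ts = TaskSet(tasks=["task_0", "task_1", "task_2"], stage_id=0)
print(len(ts))        # 3
print(ts.stage_id)    # 0
```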

