The task scheduling system for Spark is as follows:
From the diagram we can see that RDD objects generate a DAG, which then enters the DAGScheduler phase. DAGScheduler is the stage-oriented high-level scheduler: it splits the DAG into many groups of tasks, and each group of tasks is one stage. A new stage is produced whenever a shuffle is encountered, so a total of three stages can be seen here. DAGScheduler needs to record which RDDs are materialized to disk (and other materialization actions), and it needs to seek the optimal scheduling for tasks, for example by respecting data locality. DAGScheduler also monitors failures caused by lost shuffle output; if such a failure occurs, the affected stage may need to be resubmitted.
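To make the stage boundary concrete, here is a minimal, self-contained sketch (the file name and variable names are illustrative, not from the original post) of a job whose DAG splits into three stages, because each of the two shuffle dependencies ends a stage:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object StageBoundaryDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("stage-demo").setMaster("local[2]"))

    val words  = sc.textFile("input.txt").flatMap(_.split(" "))  // narrow dependency: same stage
    val counts = words.map(word => (word, 1)).reduceByKey(_ + _) // shuffle #1: ends stage 0
    val sorted = counts.sortByKey()                              // shuffle #2: ends stage 1

    sorted.collect().foreach(println) // the action makes DAGScheduler submit all three stages
    sc.stop()
  }
}
```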
After DAGScheduler has divided the stages, it hands the tasks, in units of TaskSets, to the bottom-level pluggable scheduler TaskScheduler for processing.
As can be seen, TaskScheduler is a trait; in the current Spark codebase the only implementation class of TaskScheduler is TaskSchedulerImpl.
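An abridged view of that trait, taken from the Spark 1.x sources (the exact method set varies between Spark versions, so treat this as a sketch rather than the full definition):

```scala
// org.apache.spark.scheduler.TaskScheduler, abridged (Spark 1.x)
private[spark] trait TaskScheduler {
  def start(): Unit
  def stop(): Unit

  // DAGScheduler hands over one stage's worth of tasks at a time, as a TaskSet.
  def submitTasks(taskSet: TaskSet): Unit
  def cancelTasks(stageId: Int, interruptThread: Boolean): Unit

  // Wiring back to the DAG level, so task completions and failures can be reported.
  def setDAGScheduler(dagScheduler: DAGScheduler): Unit

  // Hint used when a job does not specify the number of partitions.
  def defaultParallelism(): Int
}
```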
A TaskScheduler serves only one SparkContext instance. TaskScheduler accepts the groups of tasks sent from DAGScheduler (DAGScheduler submits tasks to TaskScheduler one stage at a time) and is responsible for distributing those tasks to the executors on the workers in the cluster to run. If a task fails to run, TaskScheduler is responsible for retrying it; and if TaskScheduler finds that a task is still not finished, it may launch another copy of the same task on a different node, and whichever copy finishes first is the one whose result is used.
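That duplicate-launch behavior is Spark's speculative execution, which is off by default. The configuration keys below are real Spark properties; the values and units shown follow the Spark 1.x documentation and may differ slightly in other versions:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.speculation", "true")           // enable backup copies of slow tasks
  .set("spark.speculation.interval", "100")   // how often (ms) to check for stragglers
  .set("spark.speculation.quantile", "0.75")  // fraction of tasks that must finish before checking
  .set("spark.speculation.multiplier", "1.5") // how much slower than the median counts as slow
```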
The tasks sent by TaskScheduler are handed to the executors on the workers, which run them in a multi-threaded manner, with each thread responsible for one task.
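The sketch below models that execution style only; the names MiniExecutor, TaskRunner, and launchTask are loosely inspired by the executor code but are not Spark's actual implementation. The real executor likewise keeps a cached thread pool and wraps each task in a Runnable:

```scala
import java.util.concurrent.Executors

// Schematic model of an executor: one shared thread pool, one thread per running task.
object MiniExecutor {
  private val threadPool = Executors.newCachedThreadPool()

  // Corresponds conceptually to Spark's TaskRunner: one Runnable per task.
  final class TaskRunner(taskId: Long, body: () => Unit) extends Runnable {
    override def run(): Unit = {
      println(s"task $taskId running on ${Thread.currentThread().getName}")
      body()
    }
  }

  def launchTask(taskId: Long)(body: => Unit): Unit =
    threadPool.execute(new TaskRunner(taskId, () => body))
}
```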
Management of the storage system is the responsibility of BlockManager.
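Application code does not call BlockManager directly; it reaches it through the RDD persistence API. A small illustration, reusing the counts RDD from the earlier sketch (so this is a continuation of that example rather than standalone code):

```scala
import org.apache.spark.storage.StorageLevel

counts.persist(StorageLevel.MEMORY_AND_DISK) // register the desired storage level
counts.count()   // first action computes the partitions; BlockManager stores them as blocks
counts.collect() // second action is served from the cached blocks, with no recomputation
```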
Look at the source code of TaskSet:
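Since the original screenshot cannot be reproduced here, the class is shown as it appears in the Spark 1.x sources (abridged; later versions rename some fields, e.g. attempt became stageAttemptId):

```scala
package org.apache.spark.scheduler

import java.util.Properties

/**
 * A set of tasks submitted together to the low-level TaskScheduler,
 * usually representing missing partitions of a particular stage.
 */
private[spark] class TaskSet(
    val tasks: Array[Task[_]],
    val stageId: Int,
    val attempt: Int,
    val priority: Int,
    val properties: Properties) {
  val id: String = stageId + "." + attempt

  override def toString: String = "TaskSet " + id
}
```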
From the first parameter, tasks, of the TaskSet constructor you can see that it is an array of Task, i.e. a TaskSet contains one group of tasks.