1) Spark is usually described as using the shuffle operation to divide stage boundaries, but in fact there are two kinds of tasks: ShuffleMapTask and ResultTask. The final stage, which produces the job's output, runs ResultTasks, and that output stage is also a product of stage division, as in the following code:
sc.parallelize(Seq(1)).foreach(println)
Every stage contains either ShuffleMapTasks or ResultTasks, since these task types are the basis for dividing stages and mark the boundaries between them. All the tasks in a stage are ultimately submitted to the TaskScheduler in the form of a TaskSet, and Spark provides three different TaskScheduler implementations: LocalScheduler, ClusterScheduler, and MesosScheduler.
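The stage split described above can be sketched as follows. This is a minimal illustration, not code from the original article; it assumes a local Spark installation, and the object and variable names are invented for the example:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object StageDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("stage-demo").setMaster("local[2]"))

    // No shuffle in the lineage: the whole job is a single stage
    // consisting only of ResultTasks.
    sc.parallelize(Seq(1)).foreach(println)

    // reduceByKey introduces a shuffle, so collect() runs two stages:
    // the first executes ShuffleMapTasks that write map output, and
    // the second executes ResultTasks that read it and return the result.
    val counts = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))
      .reduceByKey(_ + _)
      .collect()

    sc.stop()
  }
}
```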
2) Each action triggers the submission of a job, so the application we submit from the client may be divided into multiple jobs. However, if no other action follows an action, that is, the action is the last one, that action occupies a stage of its own rather than a separate job. (Ref. 0)
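A minimal sketch of one client application producing multiple jobs, one per action. It assumes a SparkContext `sc` is already available, and the variable names are illustrative:

```scala
// One lineage of transformations, shared by two actions.
val doubled = sc.parallelize(1 to 100).map(_ * 2)

val total = doubled.reduce(_ + _) // first action: submits one job
val n     = doubled.count()       // second action: submits another job
```

Because `doubled` is not cached, each action recomputes the lineage from scratch when its job runs.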
3) Tasks are divided into ShuffleMapTask and ResultTask (see point 1).