A Spark action triggers SparkContext.runJob, which in turn invokes DAGScheduler.runJob.
DAGScheduler.runJob calls the submitJob method and decides whether the job has completed from the value returned by the completion future.
The DAGScheduler processes events in a loop through onReceive: submitJob() posts a JobSubmitted event, which triggers the handleJobSubmitted method.
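The post-an-event / handle-in-a-loop pattern described above can be sketched as follows. This is a minimal illustration with assumed, simplified names (DagEvent, JobSubmitted, onReceive), not Spark's actual internals:

```scala
// Sketch of the event-loop pattern: submitJob posts a JobSubmitted event
// to a queue, and a dispatch method routes it to the matching handler.
import java.util.concurrent.LinkedBlockingQueue

sealed trait DagEvent
case class JobSubmitted(jobId: Int) extends DagEvent

val events = new LinkedBlockingQueue[DagEvent]()

// submitJob only enqueues the event; the handler runs on the event loop
def submitJob(jobId: Int): Unit = events.put(JobSubmitted(jobId))

// onReceive-style dispatch: one handler per event type
def onReceive(e: DagEvent): Unit = e match {
  case JobSubmitted(id) => println(s"handleJobSubmitted: job $id")
}

submitJob(0)
onReceive(events.take()) // prints "handleJobSubmitted: job 0"
```

Decoupling submission from handling this way means callers never block on scheduling work, and all scheduler state is mutated from a single loop.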
Normally, an ActiveJob is created from the finalStage. The finalStage is built from the final RDD of the Spark action, and a stage can only run after every stage it depends on has finished; this check is performed by the getMissingParentStages method.
getMissingParentStages uses an explicit stack to walk the RDD lineage iteratively, splitting off a new stage whenever it encounters a wide (shuffle) dependency, and returns a HashSet of the missing parent stages.
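The stack-based traversal can be sketched like this. All types here (Rdd, Stage, the two dependency classes) are simplified stand-ins for illustration, and unlike the real method, this sketch treats every shuffle parent as "missing":

```scala
// Sketch of getMissingParentStages: walk the lineage with an explicit
// stack (avoiding deep recursion), cut a stage at every wide dependency.
import scala.collection.mutable

case class Rdd(id: Int, deps: List[Dependency])
sealed trait Dependency { def rdd: Rdd }
case class NarrowDependency(rdd: Rdd) extends Dependency  // e.g. map, filter
case class ShuffleDependency(rdd: Rdd) extends Dependency // e.g. reduceByKey
case class Stage(boundaryRdd: Rdd) // stage rooted at the RDD behind a shuffle

def missingParentStages(finalRdd: Rdd): Set[Stage] = {
  val missing = mutable.HashSet[Stage]()
  val visited = mutable.HashSet[Rdd]()
  val stack = mutable.Stack[Rdd](finalRdd)
  while (stack.nonEmpty) {
    val rdd = stack.pop()
    if (!visited(rdd)) {
      visited += rdd
      rdd.deps.foreach {
        case ShuffleDependency(parent) => missing += Stage(parent) // stage boundary
        case NarrowDependency(parent)  => stack.push(parent)       // same stage
      }
    }
  }
  missing.toSet
}
```

For a lineage like makeRDD → map → reduceByKey, only the shuffle introduced by reduceByKey produces a parent stage; the narrow map dependency stays inside the current stage.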
The stages are then submitted and executed in order according to their missing parents, which completes the division of the DAG.
submitStage works through the stages of the DAG in turn: if a stage still has unfinished parent stages, the parents are submitted first and the stage itself is added to waitingStages.
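The "submit parents first, park the child in waitingStages" rule can be sketched as below. In the real DAGScheduler the parked stages are unblocked by task-completion events; this sketch, with assumed simplified types, collapses that into a synchronous check after each stage finishes:

```scala
// Sketch of submitStage: a stage with missing parents is parked in
// waitingStages; a stage whose parents are all done runs immediately,
// and its completion may unblock parked stages.
import scala.collection.mutable

case class Stage(id: Int, parents: List[Stage])

val waitingStages = mutable.LinkedHashSet[Stage]()
val finished = mutable.HashSet[Int]()
val submissionOrder = mutable.ListBuffer[Int]() // records execution order

def submitStage(stage: Stage): Unit = {
  val missing = stage.parents.filterNot(p => finished(p.id))
  if (missing.isEmpty) {
    submissionOrder += stage.id   // all parents done: run this stage's tasks
    finished += stage.id
    // a finished stage may unblock stages parked in waitingStages
    waitingStages.filter(_.parents.forall(p => finished(p.id))).foreach { s =>
      waitingStages -= s
      submitStage(s)
    }
  } else {
    waitingStages += stage        // park until parents finish
    missing.foreach(submitStage)  // submit parents first
  }
}
```

Submitting the final stage of a three-stage chain therefore executes the stages root-first, matching the "parents of final stage" ordering visible in the logs below.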
Example:
scala> sc.makeRDD(Seq()).count
16/10/28 17:54:59 [INFO] [org.apache.spark.SparkContext:59] - Starting job: count at <console>:13
16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Got job 0 (count at <console>:13) with … output partitions (allowLocal=false)
16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Final stage: Stage 0 (count at <console>:13)
16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Parents of final stage: List()
16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Missing parents: List()
16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Submitting Stage 0 (ParallelCollectionRDD[0] at makeRDD at <console>:13), which has no missing parents
collect depends on reduceByKey, and reduceByKey depends on map, so the map-side stage is submitted first (Stage 1, MappedRDD[2] at map at <console>:13):
scala> sc.makeRDD(seq).map(l => (l, 1)).reduceByKey((v1, v2) => v1 + v2).collect
16/10/28 18:00:07 [INFO] [org.apache.spark.SparkContext:59] - Starting job: collect at <console>:13
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Registering RDD 2 (map at <console>:13)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Got job 1 (collect at <console>:13) with 22 output partitions (allowLocal=false)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Final stage: Stage 2 (collect at <console>:13)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Parents of final stage: List(Stage 1)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Missing parents: List(Stage 1)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Submitting Stage 1 (MappedRDD[2] at map at <console>:13), which has no missing parents
Spark DAGScheduler stage generation process analysis experiment