Spark DAGScheduler Stage-Generation Process Analysis (Experiment)

Source: Internet
Author: User

A Spark action triggers runJob on the SparkContext, which in turn calls runJob on the DAGScheduler.

The runJob method of DAGScheduler calls submitJob, then determines whether the job completed successfully from the value delivered through the job's completionFuture.
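This submit-then-wait pattern can be sketched with a toy model (not Spark's actual code; the class and method names here only mirror the real ones): submitJob returns immediately with a future, and runJob blocks on it until the job finishes or fails.

```python
# Toy model of runJob/submitJob: submit returns a future right away,
# and runJob blocks on that future for the job's result.
from concurrent.futures import Future
import threading

class ToyDAGScheduler:
    def submit_job(self, work):
        """Return immediately with a completion future; a worker thread fills it in."""
        completion_future = Future()

        def run():
            try:
                completion_future.set_result(work())
            except Exception as exc:
                completion_future.set_exception(exc)

        threading.Thread(target=run).start()
        return completion_future

def run_job(scheduler, work):
    """Block until the job's completion future resolves (raises if the job failed)."""
    future = scheduler.submit_job(work)
    return future.result()

result = run_job(ToyDAGScheduler(), lambda: sum(range(10)))
```

The key design point the toy preserves: submission and completion are decoupled, so the caller can either block on the future (as an action does) or poll it.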

The DAGScheduler processes events in a loop through its onReceive method: submitJob posts a JobSubmitted event, which is dispatched to the handleJobSubmitted method.
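The event-loop dispatch can be illustrated with a minimal queue-based sketch (the names mirror Spark's, but the implementation is purely illustrative; Spark pattern-matches on event case classes, represented here by a type tag in a dict):

```python
# Toy sketch of the DAGScheduler event loop: submitJob posts a JobSubmitted
# event onto a queue; onReceive dispatches it to handleJobSubmitted.
import queue

events = queue.Queue()
handled = []

def handle_job_submitted(event):
    handled.append(("handleJobSubmitted", event["job_id"]))

def on_receive(event):
    # Spark pattern-matches on the event type; a dict tag stands in here.
    if event["type"] == "JobSubmitted":
        handle_job_submitted(event)

def submit_job(job_id):
    events.put({"type": "JobSubmitted", "job_id": job_id})

submit_job(0)
while not events.empty():   # the real loop runs on a dedicated thread
    on_receive(events.get())
```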

Normally, an ActiveJob is created from the finalStage. The finalStage is built from the final RDD of the Spark action, and a stage can execute only after all the stages it depends on have finished; this is determined by the getMissingParentStages method.

This method uses an explicit stack, rather than recursion, to split the lineage into stages: it walks the RDD's dependencies and returns a HashSet of the parent stages created wherever a wide (shuffle) dependency is encountered; narrow dependencies stay inside the current stage.
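The stack-based traversal described above can be modeled in a few lines. This is an illustrative model, not Spark's code: the RDD class and the narrow/wide tags are invented here to show how wide dependencies become stage boundaries while narrow dependencies keep the walk inside one stage.

```python
# Illustrative model of getMissingParentStages: walk the final RDD's
# lineage with an explicit stack instead of recursion. Each wide
# (shuffle) dependency marks a parent stage, collected into a set;
# narrow dependencies belong to the current stage, so their parents
# are pushed and the walk continues.
NARROW, WIDE = "narrow", "wide"

class RDD:
    def __init__(self, name, deps=()):
        self.name = name
        self.deps = deps  # sequence of (dep_type, parent_rdd)

def missing_parent_stages(final_rdd):
    missing = set()
    visited = set()
    stack = [final_rdd]
    while stack:
        rdd = stack.pop()
        if rdd.name in visited:
            continue
        visited.add(rdd.name)
        for dep_type, parent in rdd.deps:
            if dep_type == WIDE:
                missing.add(parent.name)   # boundary: a new parent (shuffle map) stage
            else:
                stack.append(parent)       # same stage: keep walking the lineage
    return missing

# map -> reduceByKey lineage: the shuffle splits it into two stages.
source = RDD("source")
mapped = RDD("mapped", [(NARROW, source)])
reduced = RDD("reduced", [(WIDE, mapped)])
stage_boundaries = missing_parent_stages(reduced)
```

On the toy lineage, only the wide dependency on `mapped` produces a parent stage; the narrow `source -> mapped` edge stays inside the same stage.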

The stages are then submitted and executed in order according to the missing parent stages, which completes the division of the DAG.

submitStage executes the stages in the DAG in turn: if a stage has an unfinished parent stage, the parent is submitted first and the current stage is added to waitingStages.
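The parent-first submission can be sketched as follows. This is a hedged toy model: the `parents` table is a hypothetical precomputed stage graph standing in for getMissingParentStages, and only the submit/defer control flow mirrors Spark's.

```python
# Toy submitStage: if a stage still has missing parents, submit the
# parents first and park the stage in waitingStages; otherwise run it.
waiting_stages = set()
submitted = []

# Hypothetical stage graph: stage id -> parent stage ids not yet computed.
parents = {"stage0": ["stage1"], "stage1": []}

def submit_stage(stage):
    missing = parents[stage]
    if not missing:
        submitted.append(stage)       # real Spark calls submitMissingTasks here
        waiting_stages.discard(stage)
    else:
        for parent in missing:
            submit_stage(parent)      # parents go first
        waiting_stages.add(stage)     # retried once the parents finish

submit_stage("stage0")
```

After submitting stage0, only stage1 actually runs; stage0 sits in waitingStages until its parent completes, which matches the behavior visible in the logs below.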


scala> sc.makeRDD(Seq()).count

16/10/28 17:54:59 [INFO] [org.apache.spark.SparkContext:59] - Starting job: count at <console>:13

16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Got job 0 (count at <console>:13) with … output partitions (allowLocal=false)

16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Final stage: Stage 0 (count at <console>:13)

16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Parents of final stage: List()

16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Missing parents: List()

16/10/28 17:54:59 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Submitting Stage 0 (ParallelCollectionRDD[0] at makeRDD at <console>:13), which has no missing parents

collect depends on reduceByKey, and reduceByKey depends on map; since reduceByKey introduces a shuffle (wide) dependency, the map stage (Stage 1, MappedRDD[2] at map at <console>:13) is submitted first.

scala> sc.makeRDD(Seq(...)).map(l => (l, 1)).reduceByKey((v1, v2) => v1 + v2).collect
16/10/28 18:00:07 [INFO] [org.apache.spark.SparkContext:59] - Starting job: collect at <console>:13
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Registering RDD 2 (map at <console>:13)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Got job 1 (collect at <console>:13) with 22 output partitions (allowLocal=false)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Final stage: Stage 2 (collect at <console>:13)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Parents of final stage: List(Stage 1)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Missing parents: List(Stage 1)
16/10/28 18:00:07 [INFO] [org.apache.spark.scheduler.DAGScheduler:59] - Submitting Stage 1 (MappedRDD[2] at map at <console>:13), which has no missing parents
