Lesson 6: Spark Streaming Source Code Interpretation of Dynamic Job Generation and Deep Thinking


In the previous lesson, we outlined the overall operating mechanism of Spark Streaming jobs. In this lesson we elaborate on how a job is actually generated; see the figure below:

[Figure: http://s4.51cto.com/wyfs02/M01/80/0C/wKiom1c1bjDw-ZyRAAE2Njc7QYE577.png]

In Spark Streaming, the class responsible for dynamic job scheduling is JobScheduler:

/**
 * This class schedules jobs to be run on Spark. It uses the JobGenerator
 * to generate the jobs and runs them using a thread pool.
 */
private[streaming]
class JobScheduler(val ssc: StreamingContext) extends Logging


JobScheduler has two very important members:

    • JobGenerator

    • ReceiverTracker

JobScheduler delegates the construction of each batch's RDD DAG to JobGenerator, and delegates the tracking of data received from input sources to ReceiverTracker.
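For orientation, here is how JobScheduler holds these two collaborators, in a sketch abridged from the Spark 1.x source (member names follow the actual code; other members and bodies are elided):

private[streaming]
class JobScheduler(val ssc: StreamingContext) extends Logging {

  // Builds the jobs for every batch (RDD DAG instantiation per batchInterval).
  private val jobGenerator = new JobGenerator(this)

  // Tracks blocks of data received from input sources; created in start().
  var receiverTracker: ReceiverTracker = null

  def start(): Unit = synchronized {
    ...
    receiverTracker = new ReceiverTracker(ssc)
    receiverTracker.start()
    jobGenerator.start()
    ...
  }
}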


JobGenerator, in turn, has two vital members: RecurringTimer and EventLoop. RecurringTimer controls the triggering of jobs: every batchInterval it posts a message into EventLoop's queue. EventLoop continuously watches that queue and processes each message as soon as it arrives.
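To see the mechanics in isolation, here is a minimal, self-contained sketch of the same pattern in plain Scala (illustrative only, not Spark code; TimerLoopSketch and its GenerateJobs are stand-in names): one thread posts a message to a blocking queue every interval, while another blocks on the queue and processes each message.

import java.util.concurrent.LinkedBlockingQueue

object TimerLoopSketch {

  case class GenerateJobs(timeMs: Long) // stand-in for Spark's event type

  def main(args: Array[String]): Unit = {
    val queue  = new LinkedBlockingQueue[GenerateJobs]()
    val period = 1000L // plays the role of batchInterval, in milliseconds

    // "RecurringTimer": posts one message per interval, aligned to the period.
    val timer = new Thread(new Runnable {
      override def run(): Unit = {
        try {
          var nextTime = (System.currentTimeMillis() / period + 1) * period
          while (true) {
            val wait = nextTime - System.currentTimeMillis()
            if (wait > 0) Thread.sleep(wait)
            queue.put(GenerateJobs(nextTime)) // like eventLoop.post(GenerateJobs(...))
            nextTime += period
          }
        } catch { case _: InterruptedException => () } // interrupted: stop quietly
      }
    })

    // "EventLoop": blocks on the queue and processes each message on arrival.
    val loop = new Thread(new Runnable {
      override def run(): Unit = {
        while (true) {
          val event = queue.take()       // blocks until a message is available
          println("processing " + event) // stand-in for processEvent(event)
        }
      }
    })
    loop.setDaemon(true) // let the JVM exit once main and timer finish

    timer.start()
    loop.start()
    Thread.sleep(5 * period) // let roughly five "batches" fire
    timer.interrupt()        // then stop the timer
  }
}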


In a Spark Streaming application, you call:

ssc.start()   // ssc is the StreamingContext

This call implicitly starts a chain of modules:

ssc.start()

  -> scheduler.start()

  -> jobGenerator.start()
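To ground the call chain, here is a minimal word-count application sketch showing where ssc.start() sits in user code (the master, host, port, and 5-second batch interval are placeholder choices):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCountApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("WordCountApp")
    val ssc = new StreamingContext(conf, Seconds(5)) // batchInterval = 5 seconds

    // Building the DStream graph; nothing runs yet.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()            // -> scheduler.start() -> jobGenerator.start()
    ssc.awaitTermination() // block until the context is stopped
  }
}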


Let's take a concrete look at the code of JobGenerator.start():

def start(): Unit = synchronized {
  ...
  eventLoop.start()            // start the event-processing thread
  if (ssc.isCheckpointPresent) {
    restart()                  // not the first start: recover from checkpoint
  } else {
    startFirstTime()           // first start
  }
}

In startFirstTime, the DStreamGraph and the timer are started:

private def startFirstTime() {
  val startTime = new Time(timer.getStartTime())
  graph.start(startTime - graph.batchDuration)
  timer.start(startTime.milliseconds)
  logInfo("Started JobGenerator at " + startTime)
}
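timer.getStartTime() rounds the current clock time up to the next multiple of the batch duration, so the first batch boundary is aligned. A small worked sketch of that arithmetic (alignedStartTime is an illustrative helper, not the Spark API):

// Aligns the first trigger to the next multiple of the period, as
// RecurringTimer.getStartTime() does in the Spark source.
def alignedStartTime(period: Long, nowMs: Long): Long =
  (math.floor(nowMs.toDouble / period) + 1).toLong * period

// Example: period = 5000 ms, now = 1462000003000 ms
//   -> start = 1462000005000 ms (the next 5-second boundary).
// graph.start(startTime - batchDuration) then sets the graph's zeroTime one
// full interval earlier, so the first batch covers a complete interval.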


After RecurringTimer is started, its thread posts a message to the EventLoop at the start of every new batchInterval:

private def triggerActionForNextInterval(): Unit = {
  clock.waitTillTime(nextTime)
  callback(nextTime)
  prevTime = nextTime
  nextTime += period
  logDebug("Callback for " + name + " called at time " + prevTime)
}
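For context, triggerActionForNextInterval() is driven in a loop by a daemon thread inside RecurringTimer (abridged from the Spark source):

private val thread = new Thread("RecurringTimer - " + name) {
  setDaemon(true)
  override def run() { loop }
}

// Repeatedly wait for the next interval and fire the callback, until stopped.
private def loop() {
  try {
    while (!stopped) {
      triggerActionForNextInterval()
    }
  } catch {
    case e: InterruptedException =>
  }
}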


The callback here is the anonymous function passed in when RecurringTimer was constructed:

private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")


When EventLoop receives the message:

override def run(): Unit = {
  try {
    while (!stopped.get) {
      val event = eventQueue.take()
      try {
        onReceive(event)
      } catch {
        case NonFatal(e) =>
          try {
            onError(e)
          } catch {
            case NonFatal(e) => logError("Unexpected error in " + name, e)
          }
      }
    }
  } catch {
    case ie: InterruptedException => // exit even if eventQueue is not empty
    case NonFatal(e) => logError("Unexpected error in " + name, e)
  }
}
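How does onReceive reach JobGenerator? The elided portion of JobGenerator.start() shown earlier creates eventLoop as an anonymous EventLoop subclass that wires onReceive to processEvent (abridged from the Spark source):

eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
  override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

  override protected def onError(e: Throwable): Unit = {
    jobScheduler.reportError("Error in job generator", e)
  }
}
eventLoop.start()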

onReceive thus dispatches every event to processEvent, which handles each event type:

/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    case ClearMetadata(time) => clearMetadata(time)
    case DoCheckpoint(time, clearCheckpointDataLater) =>
      doCheckpoint(time, clearCheckpointDataLater)
    case ClearCheckpointData(time) => clearCheckpointData(time)
  }
}
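The four event types matched above are plain case classes defined alongside JobGenerator in the Spark source:

private[scheduler] sealed trait JobGeneratorEvent
private[scheduler] case class GenerateJobs(time: Time) extends JobGeneratorEvent
private[scheduler] case class ClearMetadata(time: Time) extends JobGeneratorEvent
private[scheduler] case class DoCheckpoint(
    time: Time, clearCheckpointDataLater: Boolean) extends JobGeneratorEvent
private[scheduler] case class ClearCheckpointData(time: Time) extends JobGeneratorEvent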

For a GenerateJobs event, the generateJobs method is called:

private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated blocks
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}

This code is exceptionally lean; it performs the four main steps of JobGenerator's work:

    1. Ask ReceiverTracker to allocate, in one shot, all of the blocks received since the last allocation, slicing the data received since the previous batch into the new batch.

    2. Ask DStreamGraph to instantiate a fresh set of RDD DAGs for this batch. DStreamGraph.generateJobs(time) walks the output streams, asking each one to generate a job for this batch time, and the traversal finally returns a Seq[Job].

    3. Submit (a) the batch time, (b) the Seq[Job] generated in step 2, and (c) the meta information about the block data allocated in step 1 to JobScheduler for asynchronous execution. The three are packaged into a JobSet, and jobScheduler.submitJobSet(jobSet) is called to hand it over. Since both the submission to JobScheduler and the subsequent execution inside jobExecutor are asynchronous, this step returns very quickly.

    4. As soon as the submission completes (whether or not execution has actually started), a checkpoint of the whole system's current running state is immediately triggered. The doCheckpoint here likewise only posts a DoCheckpoint message asynchronously and returns without waiting for the checkpoint to actually be written. Note that the checkpoint contains actual runtime information, such as JobSets that have been submitted but have not yet finished running; see the usage sketch after this list.
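On the application side, the checkpoint and restart() paths above are only exercised when checkpointing is enabled. A minimal sketch, assuming a placeholder checkpoint directory and a socket input source (CheckpointedApp and the paths are hypothetical):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointedApp {

  // Placeholder checkpoint location; any reliable filesystem path works.
  val checkpointDir = "hdfs://namenode:8020/checkpoints/app"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("CheckpointedApp")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint(checkpointDir) // enables the periodic DoCheckpoint writes
    ssc.socketTextStream("localhost", 9999).count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // On a fresh start this builds a new context; on restart after a failure,
    // ssc.isCheckpointPresent is true and JobGenerator.restart() recovers the
    // JobSets that were submitted but had not finished.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}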



Note:

1. DT Big Data Dream Factory WeChat public account: DT_Spark
2. IMF big data hands-on course, YY live channel (8 p.m.): 68917580
3. Sina Weibo: http://www.weibo.com/ilovepains


This article is from the "Ding Dong" blog; please be sure to keep this source: http://lqding.blog.51cto.com/9123978/1772958
