Contents of this issue:
1. Dynamic job generation
2. Deep thinking
Any data that cannot be processed as a real-time stream is, in effect, invalid data. In the stream-processing era, Spark Streaming has strong appeal and bright development prospects; coupled with Spark's ecosystem, a streaming application can easily call on other powerful frameworks such as Spark SQL and MLlib, which positions Spark Streaming to stand out.
The Spark Streaming runtime is not so much a streaming framework on top of Spark Core as it is one of the most complex applications running on Spark Core. If you can master an application as complex as Spark Streaming, other complex applications are a cinch. It is also the general trend to choose Spark Streaming as the starting point for customizing a version.
In Spark Streaming, the class responsible for dynamic job scheduling is JobScheduler.
JobScheduler has two important members: JobGenerator and ReceiverTracker. JobGenerator, in turn, has two vital members: a RecurringTimer and an EventLoop.
When JobGenerator starts, it calls the startFirstTime method, unless it is not starting for the first time (i.e., it is recovering from a checkpoint), in which case it calls restart instead:

    if (ssc.isCheckpointPresent) {
      restart()
    } else {
      startFirstTime()
    }
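For context, here is an abridged sketch of the surrounding JobGenerator.start method, paraphrased from Spark's source (details vary slightly between versions): it first wires up the EventLoop, then chooses between restart and startFirstTime.

    def start(): Unit = synchronized {
      if (eventLoop != null) return // generator has already been started

      eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
        override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

        override protected def onError(e: Throwable): Unit = {
          jobScheduler.reportError("Error in job generator", e)
        }
      }
      eventLoop.start()

      if (ssc.isCheckpointPresent) {
        restart()
      } else {
        startFirstTime()
      }
    }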
In this method, the DStreamGraph and the timer are started:

    private def startFirstTime() {
      val startTime = new Time(timer.getStartTime())
      graph.start(startTime - graph.batchDuration)
      timer.start(startTime.milliseconds)
      logInfo("Started JobGenerator at " + startTime)
    }
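What graph.start does is roughly the following (an abridged paraphrase of Spark's DStreamGraph; method names differ slightly across versions): it records the zero time, initializes every output stream, and then starts the input streams.

    def start(time: Time) {
      this.synchronized {
        require(zeroTime == null, "DStream graph computation already started")
        zeroTime = time
        startTime = time
        outputStreams.foreach(_.initialize(zeroTime))   // anchor each output stream at the zero time
        outputStreams.foreach(_.remember(rememberDuration))
        outputStreams.foreach(_.validateAtStart)
        inputStreams.par.foreach(_.start())
      }
    }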
The timer is responsible for posting a GenerateJobs message to the EventLoop once every batchInterval. The EventLoop's run method loops over the message queue for as long as the loop is alive, taking messages and dispatching them as they arrive.
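The wiring inside JobGenerator looks like this (paraphrased from Spark's source): every time the timer fires, it posts a GenerateJobs event carrying the batch time into the EventLoop.

    private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
      longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")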
    override def run(): Unit = {
      try {
        while (!stopped.get) {
          val event = eventQueue.take()
          try {
            onReceive(event)
          } catch {
            case NonFatal(e) =>
              try {
                onError(e)
              } catch {
                case NonFatal(e) => logError("Unexpected error in " + name, e)
              }
          }
        }
      } catch {
        case ie: InterruptedException => // exit even if eventQueue is not empty
        case NonFatal(e) => logError("Unexpected error in " + name, e)
      }
    }
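In JobGenerator, onReceive delegates to a processEvent method that pattern-matches on the event type; GenerateJobs is the case we care about here (paraphrased from Spark's source):

    private def processEvent(event: JobGeneratorEvent) {
      logDebug("Got event " + event)
      event match {
        case GenerateJobs(time) => generateJobs(time)
        case ClearMetadata(time) => clearMetadata(time)
        case DoCheckpoint(time, clearCheckpointDataLater) =>
          doCheckpoint(time, clearCheckpointDataLater)
        case ClearCheckpointData(time) => clearCheckpointData(time)
      }
    }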
Each GenerateJobs message causes the generateJobs method to run, so jobs are generated continuously, one batch at a time. This method carries out the whole job-construction process:

    private def generateJobs(time: Time) {
      // Set the SparkEnv in this thread, so that job generation code can access the environment
      // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
      // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed
      SparkEnv.set(ssc.env)
      Try {
        jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
        graph.generateJobs(time) // generate jobs using allocated blocks
      } match {
        case Success(jobs) =>
          val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
          jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
        case Failure(e) =>
          jobScheduler.reportError("Error generating jobs for time " + time, e)
      }
      eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
    }
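The graph.generateJobs call in the middle traverses every registered output stream and asks each one to generate a job for this batch; an abridged paraphrase of Spark's DStreamGraph.generateJobs:

    def generateJobs(time: Time): Seq[Job] = {
      logDebug("Generating jobs for time " + time)
      val jobs = this.synchronized {
        // each output stream (e.g. created by print() or foreachRDD) yields at most one job
        outputStreams.flatMap(outputStream => outputStream.generateJob(time))
      }
      logDebug("Generated " + jobs.length + " jobs for time " + time)
      jobs
    }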
The job generation process consists mainly of the following four steps:
1. Ask ReceiverTracker to allocate, in one shot, all the data it has received so far, assigning the blocks received since the previous batch to the new batch.
2. Ask DStreamGraph to copy out a new set of RDD DAG instances. The return value of the complete dstreamgraph.generateJobs(time) traversal is a Seq[Job].
3. Take the meta information about this batch's data obtained in step 1, together with the RDD DAGs generated in step 2, and submit them to JobScheduler for asynchronous execution. What we submit is (a) the time, (b) the Seq[Job], and (c) the block meta information; the three are packaged as a JobSet, and jobScheduler.submitJobSet(jobSet) is called to hand it to JobScheduler. Both the submission to JobScheduler and the subsequent execution in the jobExecutor thread pool are asynchronous, so this step returns very quickly (see the submitJobSet sketch after this list).
4. As soon as the submission completes (whether or not the asynchronous execution has actually begun), immediately checkpoint the current running state of the whole system. The checkpoint taken here is also asynchronous: it merely posts a DoCheckpoint message and returns without waiting for the checkpoint to actually be written (see the doCheckpoint sketch after this list). Briefly, the checkpoint contains actual run-time information, such as the JobSets that have been submitted but have not yet finished running.
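To make steps 3 and 4 concrete, here is an abridged paraphrase of JobScheduler.submitJobSet and JobGenerator.doCheckpoint from Spark's source (details vary by version). submitJobSet only hands the jobs to the jobExecutor thread pool before returning, and doCheckpoint only runs once the asynchronous DoCheckpoint message posted at the end of generateJobs is processed.

    // In JobScheduler: record the JobSet and hand each job to the jobExecutor
    // thread pool, so the call returns immediately
    def submitJobSet(jobSet: JobSet) {
      if (jobSet.jobs.isEmpty) {
        logInfo("No jobs added for time " + jobSet.time)
      } else {
        listenerBus.post(StreamingListenerBatchSubmitted(jobSet.toBatchInfo))
        jobSets.put(jobSet.time, jobSet)
        jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
        logInfo("Added jobs for time " + jobSet.time)
      }
    }

    // In JobGenerator: triggered when the DoCheckpoint message is processed
    private def doCheckpoint(time: Time, clearCheckpointDataLater: Boolean) {
      if (shouldCheckpoint && (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)) {
        logInfo("Checkpointing graph for time " + time)
        ssc.graph.updateCheckpointData(time)
        checkpointWriter.write(new Checkpoint(ssc, time), clearCheckpointDataLater)
      }
    }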
Note:
Source: DT Big Data Dream Factory (Spark release version customization)
For more exclusive content, please follow the public WeChat account: DT_Spark
If you are interested in big data and Spark, you can attend Liaoliang's permanently free Spark public class, held every night at 20:00 in YY room 68917580.
Spark Version Customization, Day 6: Dynamic Job Generation and Deep Thinking