Contents of this issue:
1 JobScheduler internals
2 Deeper thinking
In the stream-processing era, data that cannot be processed in real time loses much of its value. Spark Streaming therefore has strong appeal and bright prospects: built on Spark's ecosystem, a streaming application can directly call other powerful frameworks such as Spark SQL and MLlib, which is what makes it stand out.
The Spark Streaming runtime is not so much a streaming framework on top of Spark Core as it is one of the most complex applications running on Spark Core. If you can master this most complex application, other complex applications become straightforward. Choosing Spark Streaming as the starting point for a custom Spark version is therefore a natural trend.
On the job-generation path, JobGenerator dynamically builds a JobSet every batchInterval and submits it to JobScheduler:
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated blocks
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos)) // submit JobSet
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
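For intuition about how generateJobs ends up being called once per batchInterval, here is a minimal, self-contained sketch of the timer-plus-event-loop pattern JobGenerator relies on: a recurring timer posts one GenerateJobs event per batch, and a separate event-loop thread consumes the events. The names GenerateJobs and the event trait mirror Spark's, but this code is an illustration written for this note, not Spark's implementation:

```scala
import java.util.concurrent.{CopyOnWriteArrayList, LinkedBlockingQueue}

// Simplified stand-in for Spark's JobGeneratorEvent hierarchy (illustrative only).
sealed trait GeneratorEvent
case class GenerateJobs(time: Long) extends GeneratorEvent

object TimerEventLoopSketch {
  // Posts one GenerateJobs event per batch interval and returns how many
  // batches the event-loop thread processed.
  def run(batches: Int, batchIntervalMs: Long): Int = {
    val queue = new LinkedBlockingQueue[GeneratorEvent]()
    val generated = new CopyOnWriteArrayList[Long]()

    // Event-loop thread: each GenerateJobs(time) event would normally
    // trigger generateJobs(time); here we just record the batch time.
    val loop = new Thread(() => {
      var done = 0
      while (done < batches) {
        queue.take() match {
          case GenerateJobs(time) => generated.add(time); done += 1
        }
      }
    })
    loop.setDaemon(true)
    loop.start()

    // Recurring timer (simplified): fire once per batch interval.
    val start = System.currentTimeMillis()
    for (i <- 1 to batches) {
      Thread.sleep(batchIntervalMs)
      queue.put(GenerateJobs(start + i * batchIntervalMs))
    }
    loop.join(5000)
    generated.size()
  }

  def main(args: Array[String]): Unit =
    println(s"batches generated: ${run(batches = 3, batchIntervalMs = 50)}")
}
```

The point of the sketch is the decoupling: the timer thread only enqueues events, while job generation happens on the event-loop thread, exactly as the DoCheckpoint event above is also posted to the loop rather than executed inline.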
In the submitJobSet method, a JobHandler is created for each Job and handed to jobExecutor to run.
def submitJobSet(jobSet: JobSet) {
  if (jobSet.jobs.isEmpty) {
    logInfo("No jobs added for time " + jobSet.time)
  } else {
    listenerBus.post(StreamingListenerBatchSubmitted(jobSet.toBatchInfo))
    jobSets.put(jobSet.time, jobSet)
    jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
    logInfo("Added jobs for time " + jobSet.time)
  }
}
private val jobExecutor = ThreadUtils.newDaemonFixedThreadPool(numConcurrentJobs, "streaming-job-executor")
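numConcurrentJobs is read from the spark.streaming.concurrentJobs configuration and defaults to 1, so with the default setting jobs run strictly one at a time, in submission order. Below is a rough, self-contained approximation of what newDaemonFixedThreadPool amounts to (my own sketch, not Spark's ThreadUtils), plus a small demonstration of the serial-execution consequence of a single-thread pool:

```scala
import java.util.concurrent.{CopyOnWriteArrayList, Executors, ExecutorService, ThreadFactory, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object DaemonPoolSketch {
  // Approximation of ThreadUtils.newDaemonFixedThreadPool: a fixed-size pool
  // whose threads are daemons and carry a recognizable name prefix.
  def newDaemonFixedThreadPool(nThreads: Int, prefix: String): ExecutorService = {
    val counter = new AtomicInteger(0)
    val factory = new ThreadFactory {
      override def newThread(r: Runnable): Thread = {
        val t = new Thread(r, s"$prefix-${counter.incrementAndGet()}")
        t.setDaemon(true) // daemon threads do not keep the JVM alive
        t
      }
    }
    Executors.newFixedThreadPool(nThreads, factory)
  }

  // With one thread (the default spark.streaming.concurrentJobs = 1),
  // submitted jobs execute strictly in submission order.
  def runSerially(jobs: Int): Seq[Int] = {
    val pool = newDaemonFixedThreadPool(1, "streaming-job-executor")
    val order = new CopyOnWriteArrayList[Int]()
    (1 to jobs).foreach(i => pool.execute(() => order.add(i)))
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    (0 until order.size).map(order.get)
  }

  def main(args: Array[String]): Unit =
    println(runSerially(3)) // jobs 1, 2, 3 complete in submission order
}
```

Raising spark.streaming.concurrentJobs above 1 widens this pool and lets jobs from different batches overlap, at the cost of losing the ordering guarantee shown here.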
In this thread pool, the JobHandler created for each job does the actual work: JobHandler calls Job.run(), which triggers the real execution of Job.func. This is where the job truly starts running.
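The Job/JobHandler relationship can be sketched with minimal stand-in classes. The real streaming Job also records timing and errors, and the real JobHandler additionally posts JobStarted/JobCompleted events to the listener bus around run(); the code below keeps only the core hand-off, so treat it as an illustration rather than Spark's implementation:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicBoolean

// Minimal stand-in for streaming's Job: a batch time plus the closure to execute.
class Job(val time: Long, func: () => Unit) {
  def run(): Unit = func() // Job.run() simply invokes the per-batch closure
}

// Minimal stand-in for JobHandler: a Runnable that the pool executes.
class JobHandler(job: Job) extends Runnable {
  override def run(): Unit = job.run()
}

object JobHandlerSketch {
  // Submits one job to a pool and reports whether its func actually ran.
  def runOneJob(): Boolean = {
    val executed = new AtomicBoolean(false)
    val job = new Job(1000L, () => executed.set(true))
    val jobExecutor = Executors.newFixedThreadPool(1)
    jobExecutor.execute(new JobHandler(job)) // hand the job to the pool
    jobExecutor.shutdown()
    jobExecutor.awaitTermination(5, TimeUnit.SECONDS)
    executed.get() // true once func has run on the pool thread
  }

  def main(args: Array[String]): Unit =
    println(s"job func executed: ${runOneJob()}")
}
```

Note that nothing about the job's business logic lives in the scheduler: JobScheduler and JobHandler only arrange for func, built earlier by the DStream graph, to run on an executor thread.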
Note:
Data from: DT Big Data Dream Factory (Spark release version customization)
Spark Version Customization, Day 7: JobScheduler internals and deeper thinking