Contents of this issue:
1 Receiver Life cycle
2 Deep thinking
All data that cannot be processed as a real-time stream is stale data. In the era of stream processing, Spark Streaming has strong appeal and bright prospects; combined with Spark's ecosystem, a streaming application can easily call on other powerful frameworks such as Spark SQL and MLlib, which will make it stand out.
The Spark Streaming runtime is not so much a streaming framework on top of Spark Core as it is one of the most complex applications on Spark Core. If you can master Spark Streaming, the most complex kind of application, then any other complex application is a cinch. Choosing Spark Streaming as the starting point for a custom Spark version is therefore the natural path.
In Spark Streaming, Receivers are used to accept data. Receivers are managed by the ReceiverTracker on the driver node. Calling ReceiverTracker's start method creates the message loop body, ReceiverTrackerEndpoint, which handles communication.
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}
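The start-once guard above (check the tracker state under a lock, throw on a double start) can be sketched in isolation. The class and exception below are hypothetical stand-ins, not Spark's implementation:

```scala
// Illustrative stand-alone sketch of ReceiverTracker's start-once guard.
// SimpleTracker is a hypothetical class for this example only.
class SimpleTracker {
  @volatile private var started = false

  def isTrackerStarted: Boolean = started

  def start(): Unit = synchronized {
    if (isTrackerStarted) {
      // mirrors: throw new SparkException("ReceiverTracker already started")
      throw new IllegalStateException("Tracker already started")
    }
    // ... the real tracker sets up the RPC endpoint and launches receivers here ...
    started = true
  }
}
```

The synchronized block plus the volatile flag guarantees that a second caller either sees the Started state or blocks until the first start finishes, so the tracker can never be started twice.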
launchReceivers, invoked from start, collects the receivers from the input streams and sends a message to start them:
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map { nis =>
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  }

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
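The receiver-collection step above pairs each input stream with exactly one receiver stamped with that stream's id. A minimal sketch of the pattern, using hypothetical DummyStream and DummyReceiver stand-ins for ReceiverInputDStream and Receiver:

```scala
// DummyStream and DummyReceiver are hypothetical stand-ins used only to
// illustrate the mapping in launchReceivers.
class DummyReceiver {
  var receiverId: Int = -1
  def setReceiverId(id: Int): Unit = { receiverId = id }
}

class DummyStream(val id: Int) {
  def getReceiver(): DummyReceiver = new DummyReceiver
}

def collectReceivers(streams: Seq[DummyStream]): Seq[DummyReceiver] =
  streams.map { nis =>
    val rcvr = nis.getReceiver()   // one receiver per input stream
    rcvr.setReceiverId(nis.id)     // receiver id == stream id
    rcvr
  }
```

Because the receiver id is the stream id, the tracker can later route scheduling decisions and restart messages back to the right input stream.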
The next step is to start all receivers, handled via the StartAllReceivers message in the message loop body ReceiverTrackerEndpoint:
case StartAllReceivers(receivers) =>
  val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
  for (receiver <- receivers) {
    val executors = scheduledLocations(receiver.streamId)
    updateReceiverScheduledExecutors(receiver.streamId, executors)
    receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
    startReceiver(receiver, executors)
  }
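The essence of scheduleReceivers is to assign each receiver (keyed by its stream id) to candidate executors so that receivers spread evenly across the cluster. A minimal round-robin sketch of that idea (the real ReceiverSchedulingPolicy also honors each receiver's preferredLocation, which this sketch omits):

```scala
// Minimal round-robin stand-in for schedulingPolicy.scheduleReceivers:
// map each stream id to one candidate executor, spreading receivers evenly.
// Ignores preferredLocation handling performed by the real policy.
def scheduleRoundRobin(streamIds: Seq[Int], executors: Seq[String]): Map[Int, String] = {
  require(executors.nonEmpty, "no executors available to run receivers")
  streamIds.zipWithIndex.map { case (streamId, i) =>
    streamId -> executors(i % executors.length)
  }.toMap
}
```

With three receivers and two executors, the third receiver wraps around to the first executor, so no executor runs more than one extra receiver.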
In startReceiver, if the Spark job wrapping a receiver has ended while the ReceiverTracker has not yet stopped, the tracker sends a RestartReceiver message to ReceiverTrackerEndpoint to restart the receiver. This logic is encapsulated in future.onComplete:
future.onComplete {
  case Success(_) =>
    if (!shouldStartReceiver) {
      onReceiverJobFinish(receiverId)
    } else {
      logInfo(s"Restarting Receiver $receiverId")
      self.send(RestartReceiver(receiver))
    }
  case Failure(e) =>
    if (!shouldStartReceiver) {
      onReceiverJobFinish(receiverId)
    } else {
      logError("Receiver has been stopped. Try to restart it.", e)
      logInfo(s"Restarting Receiver $receiverId")
      self.send(RestartReceiver(receiver))
    }
}
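Note that whether the job succeeded or failed only changes the logging; the restart decision depends solely on shouldStartReceiver. The two branches collapse to one pure decision function, sketched below with hypothetical names (ReceiverAction, Finish, Restart):

```scala
// Hypothetical distillation of the onComplete branches above: the tracker
// either lets the receiver's job finish or asks for a restart.
sealed trait ReceiverAction
case object Finish extends ReceiverAction
case object Restart extends ReceiverAction

def onJobComplete(shouldStartReceiver: Boolean): ReceiverAction =
  if (!shouldStartReceiver) Finish  // tracker is stopping: mark the job finished
  else Restart                      // job ended, but the receiver should keep running
```

This is why a receiver is fault-tolerant at the driver level: as long as the tracker is alive and wants the receiver running, any job termination, clean or crashed, leads back to a RestartReceiver message.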
Note:
Source: DT Big Data Dream Factory (Spark release version customization)
For more exclusive content, please follow the public account: DT_Spark
If you are interested in big data and Spark, you can listen free of charge to teacher Liao Liang's permanently free public Spark class, every night at 20:00, YY room number 68917580
Spark version customization, day 9: A thorough study of, and thoughts on, the elegant implementation of the Receiver's full life cycle in the driver