Contents of this issue:
1. A thorough study of the relationship between DStream and RDD
2. A thorough study of how RDDs are generated in Spark Streaming
A thorough study of the relationship between DStream and RDD
Pre-class thinking:
How is the RDD generated?
What does RDD generation rely on? It is driven by the DStream.
What is the basis for RDD generation?
Is the execution of an RDD in Spark Streaming different from RDD execution in Spark Core?
What happens to the RDD after it has run?
ForEachDStream does not necessarily trigger the execution of a job, but it does trigger job creation; job creation and job execution are unrelated to each other.
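This separation of creation and execution is visible in ForEachDStream itself. The sketch below is abridged from the Spark Streaming source (`org.apache.spark.streaming.dstream.ForEachDStream`); details such as local-property handling are omitted, so treat it as an illustration rather than the exact code:

```scala
// Abridged sketch of ForEachDStream.generateJob
private[streaming] class ForEachDStream[T: ClassTag](
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit
  ) extends DStream[Unit](parent.ssc) {

  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        // A Job object is CREATED here, but foreachFunc is not run yet:
        // it only runs later, when the JobScheduler submits the job.
        val jobFunc = () => foreachFunc(rdd, time)
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}
```

Note that `generateJob` only wraps `foreachFunc` in a closure; whether that closure ever executes is decided elsewhere, which is exactly why job creation and job execution are decoupled.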
For the DStream class, the source code explains:

DStreams internally are characterized by a few basic properties:
- A list of other DStreams that the DStream depends on
- A time interval at which the DStream generates an RDD
- A function that is used to generate an RDD after each time interval
The general meaning is:
1. A DStream depends on other DStreams, except for the first DStream, because the first DStream is generated from the data source and is used to receive data, so it has no other dependency;
2. How is an RDD generated from a DStream? The DStream generates one RDD every batchDuration;
3. For every batchDuration, a function inside the DStream generates that interval's RDD;
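These three properties correspond directly to members of the DStream class. The following sketch is abridged from `org.apache.spark.streaming.dstream.DStream` and keeps only the declarations relevant here:

```scala
// Abridged sketch of the basic properties listed in the DStream scaladoc
abstract class DStream[T: ClassTag](
    @transient private[streaming] var ssc: StreamingContext) {

  /** List of parent DStreams on which this DStream depends */
  def dependencies: List[DStream[_]]

  /** Time interval after which the DStream generates an RDD */
  def slideDuration: Duration

  /** Method that computes the RDD for the given batch time */
  def compute(validTime: Time): Option[RDD[T]]
}
```

Every concrete DStream (e.g. MappedDStream, ForEachDStream) overrides these three members, which is what ties the dependency chain, the batch interval, and the RDD-generating function together.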
Q: How does a DStream produce RDDs?
A thorough study of RDD generation in Spark Streaming
generatedRDDs is a member of DStream, so every DStream instance has this member; but in substance, at runtime everything is driven through the handle of the last DStream (the output DStream), which traces back through its dependencies.
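The mechanism can be seen in `generatedRDDs` together with `getOrCompute`. The sketch below is abridged from `DStream.scala`; caching, checkpointing, and local-property handling are left out, so it shows only the core recursion:

```scala
// Abridged sketch of generatedRDDs and getOrCompute in DStream
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()

private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // Reuse the RDD if this batch time was already computed; otherwise
  // call compute(time), which in turn calls getOrCompute on the
  // parent DStreams -- recursing back to the input DStream.
  generatedRDDs.get(time).orElse {
    if (isTimeValid(time)) {
      val rddOption = compute(time)
      rddOption.foreach { rdd => generatedRDDs.put(time, rdd) }
      rddOption
    } else {
      None
    }
  }
}
```

Because `getOrCompute` recurses through `dependencies`, calling it on the last DStream is enough to materialize the whole RDD chain for one batch, which is why the runtime only needs the handle of the output DStream.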
Special thanks to Teacher Liao Liang for his unique explanation.
Teacher Liao Liang's card:
Known as "China's number one Spark expert"
Sina Weibo: http://weibo.com/ilovepains
WeChat public account: Dt_spark
Blog: http://blog.sina.com.cn/ilovepains
QQ: 1740415547
YY classroom: live teaching daily at 20:00, channel 68917580
Spark Version Customization, Part 8: A thorough study of, and reflections on, the full life cycle of RDD generation in the Spark Streaming source code