1. Joining data streams with different time slices
After this first attempt, I looked at the logs in the Spark WebUI and found that, because Spark Streaming ran a batch every second to compute the data in real time, the program had to read HDFS every second to fetch the data for the inner join.
Spark Streaming does cache the data it is processing to reduce IO and speed up computation. In our scenario, however, we inner join a Kafka stream that receives new data every second against HDFS data that is updated only about once every two weeks. In other words, the two streams need very different cache lifetimes: the Kafka data needs to be cached for at most ten seconds or so, while the slowly-updating HDFS data needs to be cached for at least a week. But native Spark Streaming only supports joining streams whose batches share the same time interval, which means it has to read 3 GB of data from HDFS every second (or every few seconds), badly hurting the real-time performance of the program. (Of course, no such luck out of the box.) So we had to extend DStream (to be exact, extend FileInputDStream) and write a LatestFileInputDStream to handle joining data streams with different time slices.
Spark Streaming caches or checkpoints at intervals derived from the batch duration or the remember duration. To deal with the mismatched cache and checkpoint intervals of the two time slices, we rewrote the cache and checkpoint logic in our LatestFileInputDStream (which extends FileInputDStream) so that it keeps the most recent HDFS data for up to two weeks before re-reading it, and performs the inner join with the data pouring in from Kafka every second.
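The core idea can be sketched outside Spark. Below is a minimal, hypothetical Python stand-in (all names are invented; the real code extends FileInputDStream in Scala): the slowly-changing side is cached and reloaded only once its age exceeds a refresh interval, while each per-second batch joins against the cached copy.

```python
import time

class TimedCache:
    """Cache a loader's result, refreshing only after `max_age_s` seconds."""

    def __init__(self, loader, max_age_s, clock=time.monotonic):
        self.loader = loader          # e.g. a function that reads HDFS
        self.max_age_s = max_age_s    # ~two weeks in the real scenario
        self.clock = clock            # injectable clock, handy for testing
        self._value = None
        self._loaded_at = None

    def get(self):
        now = self.clock()
        if self._loaded_at is None or now - self._loaded_at > self.max_age_s:
            self._value = self.loader()   # the expensive 3 GB HDFS read
            self._loaded_at = now
        return self._value


def inner_join(batch, reference):
    """Join one micro-batch of (key, value) pairs against the cached side."""
    return [(k, v, reference[k]) for k, v in batch if k in reference]
```

With this shape, thousands of per-second batches share a single load of the reference data, and the loader only runs again once the cache goes stale.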
The checkpoint logic of Spark Streaming is mostly encapsulated in the DStreamCheckpointData class.
We need to override and extend DStreamCheckpointData so that Spark Streaming does not checkpoint our stream every second.
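The shape of that override can be sketched as follows. This is a hypothetical Python stand-in, not the real DStreamCheckpointData API: the per-batch update call becomes a no-op unless enough time has passed since the last real checkpoint.

```python
class CheckpointData:
    """Stand-in for the base checkpoint bookkeeping, called once per batch."""

    def __init__(self):
        self.writes = 0

    def update(self, batch_time):
        # In Spark this would record which files/RDDs to persist for recovery.
        self.writes += 1


class ThrottledCheckpointData(CheckpointData):
    """Only materialize a checkpoint once per `interval_s`, ignoring the
    per-second update() calls in between."""

    def __init__(self, interval_s):
        super().__init__()
        self.interval_s = interval_s
        self._last = None

    def update(self, batch_time):
        if self._last is not None and batch_time - self._last < self.interval_s:
            return  # skip: the last checkpoint is still fresh enough
        self._last = batch_time
        super().update(batch_time)
```

Driving ten one-second batches through a `ThrottledCheckpointData(interval_s=5)` produces only two real checkpoint writes instead of ten, which is the effect we wanted for the slow HDFS-backed stream.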
The generatedRDDs map in DStream also keeps the RDDs that the DStream has produced.
But DStream provides no extension hook for generatedRDDs, and we did not want our extension to be too intrusive to the Spark source, so we left that mechanism alone. Fortunately, it holds only RDD references, not the computed data; otherwise, saving 3 GB of data per second would quickly exhaust memory.
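To see why holding references rather than data is harmless, here is a small, purely illustrative Python contrast (nothing here is Spark code): a lazy "recipe" for a dataset costs almost nothing to keep around, while the materialized rows cost megabytes, much like an RDD's lineage versus its computed partitions.

```python
import sys

def dataset_recipe():
    # Lazy: describes how to produce the rows, but holds none of them.
    return (row * 2 for row in range(1_000_000))

reference = dataset_recipe()            # a generator object: a few hundred bytes
materialized = list(dataset_recipe())   # a million ints actually in memory

cheap = sys.getsizeof(reference)
expensive = sys.getsizeof(materialized)
```

Keeping `reference` per batch is cheap; keeping `materialized` per batch is what would have blown up memory.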
Although we only extended the checkpoint mechanism and left DStream's generatedRDDs caching alone, we also implemented our own timed caching so as not to be affected by Spark Streaming's second-level cache. The result is our own LatestFileInputDStream, which supports streams with different time intervals.
Replacing Spark's original FileInputDStream with LatestFileInputDStream improved the speed greatly. At the very least, we no longer sit in front of the screen waiting for hours, and Mom can finally call me home for dinner.