Spark Streaming Stream Computing Optimization Notes (2): Joining Data Streams with Different Time Slices

Source: Internet
Author: User
Tags: extend, time interval

1. Joining data streams with different time slices

After that first pass, I looked at the logs in the Spark Web UI and found that, because Spark Streaming runs a batch every second to compute the data in real time, the program also had to read HDFS every second to fetch the data for the inner join.

Spark Streaming does cache the data it is processing to reduce IO and speed up computation. In our scenario, however, we are inner joining a stream that receives new data every second with HDFS data that is updated roughly every two weeks. In other words, the two streams need to be cached over different intervals: the Kafka stream needs at most about 10 seconds of cache, while the slowly-updated HDFS data needs to be cached for at least a week. Native Spark Streaming only supports joins between streams whose buffers share the same time interval, which means HDFS has to be re-read, 3 GB at a time, every second or every few seconds, badly hurting the real-time performance of the program.

So we had to extend DStream (to be exact, extend FileInputDStream) and write a LatestFileInputDStream to handle joining data streams with different time slices.

Spark Streaming caches or checkpoints a DStream at intervals governed by its batch duration and remember duration. To solve the problem that different time slices call for different cache and checkpoint intervals, we rewrote the cache and checkpoint logic in LatestFileInputDStream, our extension of FileInputDStream, so that it re-reads the latest data from HDFS no more than once every two weeks and inner joins it with the data pouring in from Kafka every second.
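Here is a minimal sketch of the idea, not the exact implementation: since FileInputDStream is private[streaming], a subclass outside Spark's own package is awkward, so this sketch extends the public InputDStream instead; the path and refresh-interval parameters are ours. compute() is invoked every batch, but it only goes back to HDFS when the cached copy is older than the refresh interval:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{StreamingContext, Time}
import org.apache.spark.streaming.dstream.InputDStream

class LatestFileInputDStream(
    streamingContext: StreamingContext,
    path: String,      // HDFS directory holding the slowly-changing data
    refreshMs: Long    // refresh interval, e.g. two weeks in milliseconds
  ) extends InputDStream[String](streamingContext) {

  @transient private var cached: RDD[String] = null
  private var lastLoaded = 0L

  override def start(): Unit = {}
  override def stop(): Unit  = {}

  // Called once per batch. We re-read HDFS only when the cached copy is
  // older than refreshMs; otherwise we return the same persisted RDD, so
  // the per-second join never touches HDFS.
  override def compute(validTime: Time): Option[RDD[String]] = {
    if (cached == null || validTime.milliseconds - lastLoaded >= refreshMs) {
      if (cached != null) cached.unpersist(blocking = false)
      cached = streamingContext.sparkContext.textFile(path)
        .persist(StorageLevel.MEMORY_AND_DISK)
      lastLoaded = validTime.milliseconds
    }
    Some(cached)
  }
}
```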

The checkpointing logic of Spark Streaming is mostly encapsulated in the DStreamCheckpointData class.


We need to override and extend DStreamCheckpointData so that Spark Streaming stops checkpointing this stream every second.
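A hedged sketch of what such an override can look like (not the author's exact code): DStreamCheckpointData is private[streaming], so the subclass has to live inside Spark's own package; update(), cleanup(), and restore() are the real hooks, while the interval gating below is our addition:

```scala
package org.apache.spark.streaming.dstream

import scala.reflect.ClassTag
import org.apache.spark.streaming.Time

private[streaming] class LatestCheckpointData[T: ClassTag](
    parent: DStream[T],
    refreshMs: Long
  ) extends DStreamCheckpointData[T](parent) {

  private var lastUpdated = 0L

  // Record checkpoint state only when the refresh interval has elapsed,
  // instead of once per batch; cleanup() and restore() stay inherited.
  override def update(time: Time): Unit = {
    if (time.milliseconds - lastUpdated >= refreshMs) {
      super.update(time)
      lastUpdated = time.milliseconds
    }
  }
}
```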

The generatedRDDs map inside DStream also keeps a reference to every RDD the DStream has produced.


DStream provides no extension mechanism for this, and since we did not want to be too intrusive to the Spark codebase, we left the generatedRDDs mechanism unmodified. Fortunately, it only holds RDD references rather than computed data; otherwise, keeping 3 GB of data around every second would blow up memory in no time.
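For reference, the declaration in Spark's DStream source looks roughly like this, a map from batch time to the RDD reference for that batch:

```scala
// From org.apache.spark.streaming.dstream.DStream (roughly): references
// to RDDs by batch time -- lineage handles, not materialized data.
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()
```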


So although we only extended the checkpoint mechanism and left DStream.generatedRDDs alone, we also implemented our own timed caching mechanism to avoid being affected by Spark DStream's second-level cache. The result is our own LatestFileInputDStream, which supports streams with different time intervals.
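Wiring it together looks roughly like this (a hedged sketch using the class from the earlier sketch; the socket stream again stands in for Kafka). Both sides tick at the one-second batch interval, so the native join applies, but the slow side's compute() keeps returning the cached RDD:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object LatestJoin {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("latest-join")
    val ssc  = new StreamingContext(conf, Seconds(1))

    val twoWeeksMs = 14L * 24 * 60 * 60 * 1000

    // Slow side: HDFS is re-read at most once every two weeks.
    val dims = new LatestFileInputDStream(ssc, "hdfs:///dims/latest", twoWeeksMs)
      .map(line => (line.split(",")(0), line))

    // Fast side: new data every second.
    val fast = ssc.socketTextStream("localhost", 9999)
      .map(line => (line.split(",")(0), line))

    // The per-second inner join now hits the cached copy of the HDFS data.
    fast.join(dims).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```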


After replacing Spark's original FileInputDStream with LatestFileInputDStream, the job sped up dramatically; at the very least I no longer wait at the screen for hours, and Mom can finally call me home for dinner.
