Spark Streaming sliding window application

Spark Streaming provides support for sliding window operations, allowing us to perform computations over the data that falls inside a sliding window. Each time the window slides, the RDDs that fall within it are aggregated into a single computation, and the resulting RDD becomes one RDD of the window DStream.
As shown in the official diagram, a sliding window computation runs over the last three seconds of data every two seconds: the three RDDs that fall inside the three-second window are aggregated and processed together, and two seconds later the next window computation runs over the most recent three seconds of data. Every sliding window operation therefore takes two parameters, the window length and the sliding interval, and both values must be integer multiples of the batch interval.
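The grouping described above can be sketched in plain Scala without Spark. Here `windows` is a hypothetical helper (not a Spark API) that mimics how a window DStream groups the underlying per-batch RDDs; window length and slide are expressed as batch counts, which is exactly why both must be integer multiples of the batch interval:

```scala
object WindowGroupingDemo {
  // Hypothetical helper: group per-batch data into sliding windows.
  // windowLen and slide are numbers of batches, mirroring the requirement
  // that window length and sliding interval be multiples of the batch interval.
  def windows[A](batches: Vector[Vector[A]], windowLen: Int, slide: Int): Vector[Vector[A]] =
    (0 to batches.length - windowLen by slide)
      .map(start => batches.slice(start, start + windowLen).flatten)
      .toVector

  def main(args: Array[String]): Unit = {
    // Seven 1-second batches; window length 3 batches, slide 2 batches,
    // matching the diagram (3-unit window, 2-unit slide).
    val batches = Vector(Vector("a"), Vector("b"), Vector("c"),
                         Vector("d"), Vector("e"), Vector("f"), Vector("g"))
    windows(batches, windowLen = 3, slide = 2).foreach(println)
    // Each printed window aggregates 3 consecutive batches:
    // Vector(a, b, c), then Vector(c, d, e), then Vector(e, f, g)
  }
}
```

Each emitted window overlaps its predecessor by one batch, just as in the diagram.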
Spark Streaming's support for sliding windows is more complete and powerful than Storm's.
Spark Streaming supports several transformations on sliding windows, including window, countByWindow, reduceByWindow, reduceByKeyAndWindow, and countByValueAndWindow.
Example: hot search term sliding window statistics. Every 10 seconds, count how often each search term appeared over the last 60 seconds, and print the top 3 terms together with their occurrence counts.
Scala version:
package com.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * @author Ganymede
 */
object WindowHotWords {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WindowHotWords")
      .setMaster("local[2]")

    // Create the StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(conf, Seconds(5))

    val searchLogsDStream = ssc.socketTextStream("spark1", 9999)
    // Each log line carries the search word as its second space-separated field
    val searchWordsDStream = searchLogsDStream.map { searchLog => searchLog.split(" ")(1) }
    val searchWordPairDStream = searchWordsDStream.map { searchWord => (searchWord, 1) }

    // reduceByKeyAndWindow:
    // the second parameter is the window length (60 seconds),
    // the third parameter is the sliding interval (10 seconds).
    // Every 10 seconds, the data of the last 60 seconds is treated as one window,
    // its RDDs are aggregated, and the subsequent computation runs on the combined RDD.
    // Until the sliding interval elapses, incoming data is simply collected; when the
    // next 10-second slide arrives, the RDDs of the previous 60 seconds - 12 of them,
    // since the batch interval is 5 seconds - are aggregated and a single reduceByKey
    // is executed over them.
    // So reduceByKeyAndWindow computes once per window, not once per RDD of the DStream:
    // every 10 seconds it counts the words collected over the preceding 60 seconds.
    val searchWordCountsDStream = searchWordPairDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => v1 + v2, Seconds(60), Seconds(10))

    val finalDStream = searchWordCountsDStream.transform { searchWordCountsRDD =>
      // Swap to (count, word), sort descending by count, swap back, take the top 3
      val countSearchWordsRDD = searchWordCountsRDD.map(tuple => (tuple._2, tuple._1))
      val sortedCountSearchWordsRDD = countSearchWordsRDD.sortByKey(false)
      val sortedSearchWordCountsRDD = sortedCountSearchWordsRDD.map(tuple => (tuple._2, tuple._1))
      val top3SearchWordCounts = sortedSearchWordCountsRDD.take(3)
      for (tuple <- top3SearchWordCounts) {
        println("Result: " + tuple)
      }
      searchWordCountsRDD
    }

    finalDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
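The ranking logic inside transform can be checked with ordinary Scala collections. The following is a minimal sketch of the same swap-sort-swap-take pattern (it uses sortBy on a Seq rather than the RDD sortByKey, and the object name and sample data are made up for illustration):

```scala
object TopSearchWordsDemo {
  // Same pattern as the transform above: swap (word, count) to (count, word),
  // sort descending by count, swap back, and keep the top 3.
  def top3(wordCounts: Seq[(String, Int)]): Seq[(String, Int)] =
    wordCounts
      .map { case (word, count) => (count, word) }
      .sortBy { case (count, _) => -count }
      .map { case (count, word) => (word, count) }
      .take(3)

  def main(args: Array[String]): Unit = {
    val counts = Seq(("spark", 8), ("hadoop", 3), ("flink", 5), ("hive", 1))
    top3(counts).foreach(t => println("Result: " + t))
    // Prints the three most frequent terms, highest count first:
    // Result: (spark,8)
    // Result: (flink,5)
    // Result: (hadoop,3)
  }
}
```

In the Spark version the swap to (count, word) exists only because sortByKey sorts by key; on a plain collection sortBy could rank the pairs directly, but the sketch keeps the swap so the steps line up with the code above.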