scala> val textFile = sc.textFile("/users/admin/spark-1.5.1-bin-hadoop2.4/README.md")
scala> val topWord = textFile.flatMap(_.split(" ")).filter(!_.isEmpty).map((_, 1)).reduceByKey(_ + _).map{ case (word, count) => (count, word) }.sortByKey(false)
scala> topWord.take(5).foreach(println)
Result:
(21,the)
(14,spark)
(14,to)
(12,for)
(10,a)
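The same pipeline can also be packaged as a standalone application instead of being typed into the REPL. Below is a minimal sketch, assuming Spark 1.5.x on the classpath; the object name, master URL, and input path are only placeholders.

// WordCount.scala - a minimal standalone sketch of the same word-count logic (names and paths are illustrative)
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val textFile = sc.textFile("/users/admin/spark-1.5.1-bin-hadoop2.4/README.md")
    val topWord = textFile
      .flatMap(_.split(" "))                          // split each line into words
      .filter(!_.isEmpty)                             // drop empty strings produced by repeated spaces
      .map((_, 1))                                    // pair each word with a count of 1
      .reduceByKey(_ + _)                             // sum the counts per word
      .map { case (word, count) => (count, word) }    // swap so the count becomes the key
      .sortByKey(false)                               // sort counts in descending order

    topWord.take(5).foreach(println)
    sc.stop()
  }
}

Compiled (for example with sbt) and run through spark-submit, this should print the same (count, word) pairs as the shell session above.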
Original reference:
Here is a simple example of the Spark Scala REPL shell:
scala> val hamlet = sc.textFile("~/temp/gutenburg.txt")
hamlet: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12
In the code above, we read the file and created an RDD of strings, where each element represents one line of the file.
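As a quick sanity check (a sketch assuming the hamlet RDD from the snippet above), you can count the elements and look at the first one to confirm that each element is a single line of text; the returned values depend on your input file:

scala> hamlet.count()   // number of lines in the file
scala> hamlet.first()   // the first line, as a String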
scala> val topWordCount = hamlet.flatMap(str => str.split(" "))
         .filter(!_.isEmpty).map(word => (word, 1)).reduceByKey(_ + _)
         .map{ case (word, count) => (count, word) }.sortByKey(false)

topWordCount: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[10] at sortByKey at <console>:14
1. The command above shows how simple this is: transformations and actions are chained together through the plain Scala API.
2. Some words may be separated by more than one space, which produces empty strings, so we filter them out with filter(!_.isEmpty).
3. Each word is mapped to a key-value pair: map(word => (word, 1)).
4. To sum up all the counts, a reduce step, reduceByKey(_ + _), is called; _ + _ conveniently adds up the values for each key.
5. Once we have the words and their counts, the next step is to sort by count. In Apache Spark, you can only sort by key, not by value, so we use map{ case (word, count) => (count, word) } to flip (word, count) into (count, word).
6. Since we want the 5 most commonly used words, sortByKey(false) sorts the counts in descending order. Each of these steps is broken out separately in the sketch after this list.
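To make the steps above easier to follow, here is the same chain split into named intermediate RDDs. This is a sketch assuming the hamlet RDD from the earlier snippet; the variable names are chosen only for illustration:

scala> val words    = hamlet.flatMap(_.split(" "))                         // 1. split lines into words
scala> val nonEmpty = words.filter(!_.isEmpty)                             // 2. drop empty strings
scala> val pairs    = nonEmpty.map(word => (word, 1))                      // 3. map each word to (word, 1)
scala> val counts   = pairs.reduceByKey(_ + _)                             // 4. sum the counts per word
scala> val swapped  = counts.map { case (word, count) => (count, word) }   // 5. flip to (count, word)
scala> val sorted   = swapped.sortByKey(false)                             // 6. descending sort by count

Because these are all transformations, nothing is computed until an action such as take or foreach is called on the final RDD.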
scala> topWordCount.take(5).foreach(x => println(x))
The command above contains take(5) (an action, which triggers the computation) and prints the 5 most commonly used words in the ~/temp/gutenburg.txt text.
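For contrast with the transformations, here is a small sketch of how actions return results to the driver; take and saveAsTextFile are standard RDD actions, and the output path is only an example:

scala> val top5: Array[(Int, String)] = topWordCount.take(5)   // take is an action: it runs the job and returns a local Array
scala> top5.foreach(println)                                   // printing happens on the driver, not on the cluster
scala> topWordCount.saveAsTextFile("/tmp/top-word-counts")     // another action; writes the full sorted RDD to a directory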
Spark Shell: the 5 most used words found in the text