Spark Shell: Finding the 5 Most Used Words in a Text


scala> val textFile = sc.textFile("/users/admin/spark-1.5.1-bin-hadoop2.4/README.md")
scala> val topWord = textFile.flatMap(_.split(" ")).filter(!_.isEmpty)
         .map((_, 1)).reduceByKey(_ + _)
         .map { case (word, count) => (count, word) }.sortByKey(false)
scala> topWord.take(5).foreach(println)

Result:
(21,the)
(14,Spark)
(14,to)
(12,for)
(10,a)

Original reference:

Here is a simple example in the Spark Scala REPL shell:

scala> val hamlet = sc.textFile("~/temp/gutenburg.txt")
hamlet: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

In the code above, we read the file and created an RDD of strings, where each element represents one line of the file.
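
Before going further, you can sanity-check what was loaded; here is a minimal sketch in the same session (the exact counts and lines depend on your copy of the file):

scala> hamlet.count()                       // number of lines in the file
scala> hamlet.first()                       // the first line, as a String
scala> hamlet.take(3).foreach(println)      // print the first three lines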

scala> val topWordCount = hamlet.flatMap(str => str.split(" "))
         .filter(!_.isEmpty).map(word => (word, 1)).reduceByKey(_ + _)
         .map { case (word, count) => (count, word) }.sortByKey(false)

topWordCount: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[10] at sortByKey at <console>:14

1. The command above shows how simple this operation is: transformations and actions are chained together through the plain Scala API (a step-by-step sketch follows this list).
2. Some words may be separated by more than one space, which produces empty strings, so they need to be filtered out with filter(!_.isEmpty).
3. Each word is mapped to a key-value pair: map(word => (word, 1)).
4. To sum up all the counts, a reduce step, reduceByKey(_ + _), is called; _ + _ is a concise way to sum the two values for each key.
5. Now we have the words and their respective counts, and the next step is to sort by count. In Apache Spark, you can only sort by key, not by value, so map { case (word, count) => (count, word) } is used to flip each (word, count) pair into (count, word).
6. Since we want the 5 most commonly used words, sortByKey(false) sorts the counts in descending order.
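
To make each step concrete, here is a small step-by-step sketch on an in-memory dataset (a hypothetical toy input built with sc.parallelize; the intermediate names lines, words, pairs, counts, and byCount are illustrative, not part of the original example):

scala> val lines = sc.parallelize(Seq("to be or not  to be", "that is the question"))
scala> val words = lines.flatMap(_.split(" "))          // split lines into words; "not  to" yields one empty string
scala> val pairs = words.filter(!_.isEmpty).map(word => (word, 1))   // drop empties, pair each word with 1
scala> val counts = pairs.reduceByKey(_ + _)            // sum the 1s per word
scala> val byCount = counts.map { case (word, count) => (count, word) }.sortByKey(false)
scala> byCount.collect().foreach(println)               // (2,to) and (2,be) print first, then the count-1 words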

scala> topWordCount.take(5).foreach(x => println(x))
(1044,the)
(730,and)
(679,of)
(648,to)
(511,I)
The command above ends with take(5) (an action, which triggers the actual computation) and prints the 5 most commonly used words in ~/temp/gutenburg.txt.
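
As a side note, the key/value swap in step 5 is only needed because sortByKey sorts by key; a sketch of one alternative (assuming the same hamlet RDD) uses RDD.top with a custom Ordering, which returns the largest elements directly:

scala> hamlet.flatMap(_.split(" ")).filter(!_.isEmpty)
         .map(word => (word, 1)).reduceByKey(_ + _)
         .top(5)(Ordering.by(_._2))          // the 5 (word, count) pairs with the largest counts

Because top keeps only a bounded number of candidates per partition, it avoids sorting the entire dataset just to take the first five results.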
