Spark shell word frequency statistics and PV statistics experience


All of the steps below follow my own experiments, explained in the way I understand them. Feel free to use them for reference, and leave a message if you find a problem.

Sample Data

$ cat hh.txt

Hello,world

Hello,hadoop

Hello,oracle

Hadoop,oracle

Hello,world

Hello,hadoop

Hello,oracle

Hadoop,oracle

Word frequency statistics, sorted in descending order of count, with step-by-step explanations

1. Load the file into an RDD:

scala> var file = sc.textFile("hdfs://h201:9000/hh.txt")

2. Split each row on the comma; the resulting words are all flattened into a single collection. The underscore _ is a placeholder for each input line:

scala> val h1 = file.flatMap(_.split(","))

3. Feed each word into map to turn it into a (k, v) key-value pair of the form (word, 1). reduceByKey() then iterates over all the values that share the same key; _+_ means those values are summed, so each word comes back as a (word, total count) pair:

scala> val h2 = h1.map(x => (x, 1)).reduceByKey(_ + _)

4. Use a second map to receive the (k, v) key-value pairs from the previous step and swap their positions. For example:

the input ("Hello", 5) becomes (5, "Hello")

scala> val h3 = h2.map(x => (x._2, x._1))

5. Sort the results by key:

scala> val h4 = h3.sortByKey(false)    // false = descending, true = ascending

6. Use map once more to swap the sorted key-value pairs back. For example:

(5, "Hello") (4, "Hadoop") becomes ("Hello", 5) ("Hadoop", 4)

scala> val h5 = h4.map(x => (x._2, x._1))

7. The word frequency statistics are now complete and arranged in descending order of count. The last step is to save the results to HDFS; note that the path is a directory, not a file:

scala> h5.saveAsTextFile("hdfs://h201:9000/output1")

All of the operations above were split into steps for ease of understanding; they can also be combined into a single line:

scala> val wc = file.flatMap(_.split(",")).map(x => (x, 1)).reduceByKey(_ + _).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1)).saveAsTextFile("hdfs://h201:9000/output1")
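For readers without a cluster handy, the same pipeline can be sketched in plain Python (no Spark required). Each step below mirrors one of the RDD transformations above, using the sample data from hh.txt:

```python
from collections import Counter

# The sample lines from hh.txt above
lines = [
    "Hello,world", "Hello,hadoop", "Hello,oracle", "Hadoop,oracle",
    "Hello,world", "Hello,hadoop", "Hello,oracle", "Hadoop,oracle",
]

# flatMap(_.split(",")): one flat list of words
words = [w for line in lines for w in line.split(",")]

# map(x => (x, 1)) + reduceByKey(_ + _): sum a 1 for every occurrence
counts = Counter(words)

# swap / sortByKey(false) / swap back: sort by count, descending
result = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(result)  # "Hello" comes first with a count of 6
```

Note that, as in the Spark version, "Hadoop" and "hadoop" are counted as different words because no case normalization is applied.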

The difference between flatMap() and map()

Both flatMap() and map() apply the same operation to each line of input, but they produce differently shaped results.

Example Sample:

Hello,world

Hello,hadoop

Hello,oracle

Import the file into an RDD:

scala> var file = sc.textFile("hdfs://xxx:9000/xx.txt")

In both cases each line is split on the comma with the split method.

scala> var fm = file.flatMap(_.split(","))

Each row is split on the comma and every word is placed into one flat collection, so iterating over fm yields one word at a time. In Java-style notation the result is {"Hello", "world", "Hello", "hadoop", "Hello", "oracle"}, which is equivalent to a one-dimensional array.

scala> var m = file.map(_.split(","))

Each row is split on the comma into its own string array, and these arrays are collected into one result set, so iterating over m yields one array at a time. In Java-style notation the result is {{"Hello", "world"}, {"Hello", "hadoop"}, {"Hello", "oracle"}}, which is equivalent to a two-dimensional array.
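The difference in shape can be sketched in plain Python, with list comprehensions standing in for the RDD operations:

```python
# The three sample lines above
lines = ["Hello,world", "Hello,hadoop", "Hello,oracle"]

# map-like: each line becomes its own array -> a nested, "two-dimensional" result
mapped = [line.split(",") for line in lines]

# flatMap-like: every word lands in one flat, "one-dimensional" list
flat = [w for line in lines for w in line.split(",")]

print(mapped)  # [['Hello', 'world'], ['Hello', 'hadoop'], ['Hello', 'oracle']]
print(flat)    # ['Hello', 'world', 'Hello', 'hadoop', 'Hello', 'oracle']
```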

This distinction comes in handy when using Apache logs to count PV (page views). For example, the log format is as follows:

123.23.4.5--Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

23.12.4.5--Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

We only need to take the first column, which is separated from the rest by a space. flatMap is not suitable here; map is what we want:

scala> var file = sc.textFile("hdfs://h201:9000/access.log")

scala> var h1 = file.map(_.split(" ", 2))    // split into at most two columns on the space

scala> var h2 = h1.map(x => (x(0), 1))       // take column 0 of each array to extract the IP

scala> var h3 = h2.reduceByKey(_ + _)        // count PV per IP

The sorting and saving steps are the same as above and are not repeated here.
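As an illustration, the PV count can also be sketched in plain Python. The log lines here are made up; note that Python's split(" ", 1) corresponds to Scala's split(" ", 2), since both keep everything after the first space in one piece:

```python
# Hypothetical Apache access-log lines; only the leading IP column matters here
log_lines = [
    '123.23.4.5 - - "GET /index.html"',
    '23.12.4.5 - - "GET /index.html"',
    '123.23.4.5 - - "GET /about.html"',
]

pv = {}
for line in log_lines:
    ip = line.split(" ", 1)[0]   # column 0: the client IP
    pv[ip] = pv.get(ip, 0) + 1   # reduceByKey(_ + _): one hit per log line

print(pv)  # {'123.23.4.5': 2, '23.12.4.5': 1}
```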

