First Experience with spark-shell


Tags: hadoop, spark, terminal, ubuntu

1. Copy the file to HDFS:

hadoop@Mhadoop:/usr/local/hadoop$ bin/hdfs dfs -mkdir /user
hadoop@Mhadoop:/usr/local/hadoop$ bin/hdfs dfs -mkdir /user/hadoop
hadoop@Mhadoop:/usr/local/hadoop$ bin/hdfs dfs -copyFromLocal /usr/local/spark/spark-1.3.1-bin-hadoop2.4/README.md /user/hadoop/

2. Run spark-shell.
3. Read the file and count how often the word "spark" appears. First confirm that the SparkContext is available:

scala> sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@...

Then load the file from HDFS:

scala> val file = sc.textFile("hdfs://Mhadoop:9000/user/hadoop/README.md")
file: org.apache.spark.rdd.RDD[String] = hdfs://Mhadoop:9000/user/hadoop/README.md MapPartitionsRDD[1] at textFile at <console>:21
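textFile is lazy, so nothing has actually been read from HDFS yet. A quick sanity check (a minimal sketch using standard RDD actions; the exact output depends on your copy of README.md):

scala> file.count()    // total number of lines; triggers the first read from HDFS
scala> file.take(3)    // peek at the first three lines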

The file variable is a MapPartitionsRDD. Next, filter the lines that contain the word "spark":
scala> val sparks = file.filter(line => line.contains("spark"))
sparks: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at filter at <console>:23
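Like textFile, filter is a transformation, so no data moves yet; Spark only records the lineage. You can inspect that lineage with toDebugString, a standard RDD method (output omitted here; it prints the chain of RDDs back to the HDFS file):

scala> sparks.toDebugString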

Count the matching lines; the result is 11:

scala> sparks.count

Open another terminal and verify with Ubuntu's built-in wc command. The first column of wc output is the line count, which is exactly what sparks.count measures:

hadoop@Mhadoop:/usr/local/spark/spark-1.3.1-bin-hadoop2.4$ grep spark README.md|wc
     11      50     761
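The same session can also count every word in the file with the classic word-count pattern (a minimal sketch built from standard RDD operations; the counts variable name is illustrative):

scala> val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.filter(_._1 == "spark").collect()   // occurrences of the exact token "spark"

Note that this counts individual tokens, while sparks.count above counts lines containing the substring "spark", so the two numbers need not agree.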

4. Run spark cache and observe the performance gain:

scala> sparks.cache
res3: sparks.type = MapPartitionsRDD[2] at filter at <console>:23

Open the web console at http://192.168.85.10:4040/stages/

After caching, the stage time drops from seconds to milliseconds, a clear performance gain.
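cache is itself lazy: it only marks the RDD for in-memory storage, and the data is materialized the next time an action runs. To reproduce the timing difference, run count twice after caching (a minimal sketch; the first count pays the cost of reading from HDFS and filling the cache, the second is served from memory):

scala> sparks.count()   // first action after cache: reads HDFS, filters, and fills the cache
scala> sparks.count()   // second action: served from memory; shows up as ms on the Stages page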
