Preface
To be clear, this post builds on the following posts, standing on their shoulders, and is oriented toward my own big-data work:
http://kongcodecenter.iteye.com/blog/1231177
http://blog.csdn.net/u010376788/article/details/51337312
http://blog.csdn.net/arkblue/article/details/7897396
Approach 1: the basic way
First, write the WordCount.scala program.
Then, package it into a jar named WC.jar; in my case, I exported it to the Windows desktop.
Next, upload the jar to the Linux desktop, and put the input file spark.txt into the root (/) directory of HDFS.
Finally, from the bin directory of the Spark installation, submit the job (the two trailing arguments are the input and output paths, read by WordCount as args(0) and args(1)):
spark-submit \
> --class cn.spark.study.core.WordCount \
> --master local[1] \
> /home/spark/Desktop/WC.jar \
> hdfs://SparkSingleNode:9000/spark.txt \
> hdfs://SparkSingleNode:9000/WCout
Approach 2: the advanced way
Sometimes, when running a Java program on Linux, we need to invoke shell commands and scripts. The Runtime.getRuntime().exec() method provides exactly this capability, and Runtime offers the following exec() overloads:
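From the java.lang.Runtime API (each declares throws IOException):
public Process exec(String command)
public Process exec(String command, String[] envp)
public Process exec(String command, String[] envp, File dir)
public Process exec(String[] cmdarray)
public Process exec(String[] cmdarray, String[] envp)
public Process exec(String[] cmdarray, String[] envp, File dir)
The String variants tokenize the command on whitespace; the String[] variants let you pass each argument explicitly, which is safer when a path contains spaces.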
Without further ado, let's get into it.
Step 1: for the sake of convention, name the class JavaShellUtil.java and write it locally:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class JavaShellUtil {
    public static void main(String[] args) throws Exception {
        // HDFS path of the input file; passed to test.sh as its first argument ($1)
        String cmd = "hdfs://SparkSingleNode:9000/spark.txt";
        try {
            Process pro = Runtime.getRuntime().exec("sh /home/spark/test.sh " + cmd);
            // Read the script's stdout before calling waitFor(); waiting first can
            // deadlock once the child's output buffer fills up
            BufferedReader read = new BufferedReader(new InputStreamReader(pro.getInputStream()));
            String result = read.readLine(); // only the first line of the script's output
            System.out.println("INFO:" + result);
            read.close();
            pro.waitFor();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
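The readLine() above deliberately captures only the first line, which matches the single INFO line printed in step 3 below; note also that spark-submit writes most of its own logging to stderr, which this simple version ignores. A minimal sketch of a sturdier variant using ProcessBuilder, the JDK's newer front end to Runtime.exec() (the class name JavaShellUtil2 is just for illustration):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class JavaShellUtil2 {
    public static void main(String[] args) throws Exception {
        // Argument-array form: no tokenization surprises if a path contains spaces
        ProcessBuilder pb = new ProcessBuilder(
                "sh", "/home/spark/test.sh", "hdfs://SparkSingleNode:9000/spark.txt");
        pb.redirectErrorStream(true); // fold stderr (spark-submit's logging) into stdout
        Process pro = pb.start();
        BufferedReader read = new BufferedReader(new InputStreamReader(pro.getInputStream()));
        String line;
        while ((line = read.readLine()) != null) { // every line, not just the first
            System.out.println("INFO:" + line);
        }
        read.close();
        System.out.println("exit code: " + pro.waitFor());
    }
}

For reference, here is the WordCount program packaged inside WC.jar: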
package cn.spark.study.core
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
/**
* @author Administrator
*/
object WordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      println("usage: WordCount <inputPath> <outputPath>")
      System.exit(1)
    }
    val conf = new SparkConf()
      .setAppName("WordCount")
      // .setMaster("local") // "local" runs non-distributed, the same under Windows and Linux
    val sc = new SparkContext(conf)
    val inputPath = args(0)
    val outputPath = args(1)
    val lines = sc.textFile(inputPath, 1)
    val words = lines.flatMap { line => line.split(" ") }
    val pairs = words.map { word => (word, 1) }
    val wordCounts = pairs.reduceByKey { _ + _ }
    wordCounts.collect().foreach(println) // printed to stdout; this is the line JavaShellUtil reads
    wordCounts.repartition(1).saveAsTextFile(outputPath) // one partition => a single part file
  }
}
Step 2: write the test.sh script. It takes the input path as its first argument ($1) and hard-codes the output path:
[email protected]:~$ cat test.sh
#!/bin/sh
/usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin/spark-submit \
--class cn.spark.study.core.WordCount \
--master local[1] \
/home/spark/Desktop/WC.jar \
$1 hdfs://SparkSingleNode:9000/WCout
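Before wiring the script into Java, it can be sanity-checked by hand with exactly the command line that JavaShellUtil assembles:

sh /home/spark/test.sh hdfs://SparkSingleNode:9000/spark.txt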
Step 3: upload JavaShellUtil.java and the packaged WC.jar, then compile and run
[email protected]:~$ pwd
/home/spark
[email protected]:~$ ls
Desktop Downloads Pictures Templates Videos
Documents Music Public test.sh
[email protected]:~$ cd Desktop/
[email protected]:~/Desktop$ ls
JavaShellUtil.java WC.jar
[email protected]:~/Desktop$ javac JavaShellUtil.java
[email protected]:~/Desktop$ java JavaShellUtil
INFO:(hadoop,1)
[email protected]:~/Desktop$ cd /usr/local/hadoop/hadoop-2.6.0/
Step 4: inspect the output
[email protected]:/usr/local/hadoop/hadoop-2.6.0$ bin/hadoop fs -cat /WCout/par*
(hadoop,1)
(hello,5)
(storm,1)
(spark,1)
(hive,1)
(hbase,1)
[email protected]:/usr/local/hadoop/hadoop-2.6.0$
Success!
For more on passing arguments to shell scripts (this is how $1 in test.sh receives the HDFS path), see
http://www.runoob.com/linux/linux-shell-passing-arguments.html
Finally, this is not limited to the example here; the technique can be woven into our future production work wherever a Java program needs to drive a shell step. Simply call it where needed; it is very practical!
Calling shell commands and scripts from Java, in service of Hadoop/Spark clusters