Shell script -- running Hadoop from the Linux terminal. The shell script is saved as test.sh; the Java source file is wc.java. [Note: the source is packaged into 1.jar, the main class is wc, the input directory on HDFS is input, and the output directory on HDFS is output.] [Note: the input and output directory arguments are optional.]

Run it with:

./test.sh wc.java wc input output

The script (the Hadoop jar versions, 0.22.0 here, should match your installation):

#!/bin/bash
# echo "$# $0 $1 $2"
HH=$HADOOP_HOME
if [ $# -lt 2 ]; then
    echo "usage: jc.sh source.java ClassName [InputFile] [OutputFile]"
    exit 0
elif [ "${1##*.}" != "java" ]; then
    echo "Notice: source.java!"
    exit 0
else
    # clear previous build output, compile against the Hadoop jars,
    # and package the classes into 1.jar
    rm -r ./classes/*
    javac -classpath $HH/hadoop-mapred-0.22.0.jar:$HH/hadoop-hdfs-0.22.0.jar:$HH/hadoop-common-0.22.0.jar:$HH/lib/commons-cli-1.2.jar -d classes ./$1
    jar -cvf 1.jar -C classes/ .
    echo "=================== Output ==================="
    # run the job with however many arguments were supplied
    if [ $# -eq 2 ]; then
        hadoop jar 1.jar $2
    elif [ $# -eq 3 ]; then
        hadoop jar 1.jar $2 $3
    elif [ $# -eq 4 ]; then
        hadoop jar 1.jar $2 $3 $4
    fi
    echo "=============================================="
    rm 1.jar
fi
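The extension check in the script relies on the `${1##*.}` parameter expansion, which strips the longest prefix ending in a dot and leaves only the file extension. A minimal, self-contained sketch of that check (the `check_ext` helper name is my own, not from the script):

```shell
#!/bin/sh
# ${1##*.} removes the longest prefix matching "*." from $1,
# leaving just the extension, which we compare against "java".
check_ext() {
    ext=${1##*.}
    if [ "$ext" = "java" ]; then
        echo "ok"
    else
        echo "not a .java file"
    fi
}

check_ext wc.java     # prints: ok
check_ext wc.class    # prints: not a .java file
```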
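The if/elif chain at the end of the script exists only to forward between one and three extra arguments to `hadoop jar`. A hedged alternative sketch: shift past the source-file argument and pass the rest through in one call. Here `run_job` is a stand-in for `hadoop jar 1.jar` so the snippet runs without a Hadoop cluster:

```shell
#!/bin/bash
# run_job echoes the command instead of invoking hadoop,
# so this sketch is runnable anywhere.
run_job() {
    echo "hadoop jar 1.jar $*"
}

set -- wc.java wc input output   # simulate ./test.sh wc.java wc input output
shift                            # drop the source-file argument
run_job "$@"                     # prints: hadoop jar 1.jar wc input output
```

This keeps the script correct for any number of trailing job arguments rather than hard-coding the two-, three-, and four-argument cases.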