1. Write a Java program, WordCount.java, that counts words, with the following code:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
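To see what the mapper and reducer above actually compute, the same logic can be simulated in a single JVM without Hadoop: the "map" phase tokenizes each line and emits (word, 1), and the "reduce" phase sums the ones per word. This is only an illustrative sketch (the class and method names here are made up for the demo); the real job distributes this work across the cluster.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountLocal {

    // Simulates the MapReduce flow locally: tokenize each line the same way
    // TokenizerMapper does, then sum the per-word ones like IntSumReducer.
    public static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            // StringTokenizer splits on whitespace, matching the mapper
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                // merge() performs the reducer's "sum += val" per key
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> c = count(new String[]{"hello world", "hello hadoop"});
        System.out.println(c); // hello appears twice, world and hadoop once each
    }
}
```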
2. Declare the Java environment variables:
export JAVA_HOME=/usr/java/default
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
Note: if you do not declare the above environment variables, you will receive an error message when you run the commands in the later steps.
3. Compile and create the jar package.
bin/hadoop com.sun.tools.javac.Main WordCount.java
jar cf wc.jar WordCount*.class
4. Run the wc.jar package built in step 3. Note that the output folder must not be created manually; it is created automatically when the job runs.
bin/hadoop jar wc.jar WordCount /user/root/wordcount/input /user/root/wordcount/output
When the job finishes normally, two files are generated under the output folder, part-r-00000 and _SUCCESS, and the analysis results are stored in part-r-00000. To view them, run:
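The part-r-00000 file is plain text written by Hadoop's default TextOutputFormat: one "word<TAB>count" line per key, with keys in sorted order because the framework sorts mapper output by key before the reducer runs. A minimal sketch of that line format (class name and sample counts are invented for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

public class PartFileFormatDemo {

    // Renders counts the way TextOutputFormat writes them to part-r-00000:
    // key, a tab, the value, a newline. TreeMap keeps the keys sorted,
    // mirroring the sorted order produced by the shuffle phase.
    public static String render(TreeMap<String, Integer> counts) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            sb.append(e.getKey()).append('\t').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> counts = new TreeMap<>();
        counts.put("world", 1);
        counts.put("hello", 2);
        // "hello" prints before "world": output is sorted by key
        System.out.print(render(counts));
    }
}
```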
bin/hadoop fs -cat /user/root/wordcount/output/part-r-00000
This prints the analysis results: one word per line, followed by its count.
At this point, the example is complete: running WordCount is the Hadoop version of the classic HelloWorld exercise.