I. Install Eclipse
II. Install the Hadoop plugin in Eclipse
1. Download the Hadoop plugin
http://download.csdn.net/detail/tondayong1981/8680589
2. Put the plugin into the Eclipse/plugins directory
3. Restart Eclipse and configure the Hadoop installation directory
If the plugin was installed successfully, opening Window -> Preferences will show a Hadoop Map/Reduce option on the left side of the window. Click it and set the Hadoop installation path on the right side of the window.
4. Configure Map/Reduce Locations
Open Window -> Open Perspective -> Other
Select Map/Reduce and click OK.
The Map/Reduce Locations view appears in the lower-right corner.
Click the Map/Reduce Locations tab, then click the icon on its right to open the Hadoop Location configuration window.
Enter any Location name. Configure the Map/Reduce Master and the DFS Master; their Host and Port must match the settings in core-site.xml. (It seems the Map/Reduce Master port can be set to any number?)
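For reference, here is a minimal sketch of what core-site.xml might contain so that the DFS Master matches the hdfs://localhost:9000 address used later in this tutorial. The property name fs.defaultFS is the Hadoop 2.x form; older configurations use fs.default.name instead.

<configuration>
  <property>
    <!-- The DFS Master host and port in the plugin should match this address -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>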
Click the "Finish" button to close the window.
In the left pane, expand DFS Locations -> myhadoop (the location name from the previous step); if you can see a user folder, the installation succeeded.
If the installation failed, check whether Hadoop has been started and whether Eclipse is configured correctly.
III. Create the WordCount Project
File -> New -> Project, select Map/Reduce Project, enter the name WordCount, and so on.
Create a new class in the WordCount project named WordCount with the following code:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: emits (word, 1) for every token in the input
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word (also used as the combiner)
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
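One small note on the code above: in Hadoop 2.x the Job(Configuration, String) constructor is deprecated, and Job.getInstance(conf, "word count") is the preferred replacement, although the version shown here still compiles and runs.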
IV. Run the Program
1. Create an input directory on HDFS
hadoop fs -mkdir /user
hadoop fs -mkdir /user/input
2. Copy the local README.txt into the HDFS input directory
hadoop fs -copyFromLocal /opt/hadoop/README.txt /user/input
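To confirm the copy succeeded, you can list the directory (assuming the same paths as above):
hadoop fs -ls /user/input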
3. Right-click WordCount.java, choose Run As -> Run Configurations, and configure the run parameters, namely the input and output folders:
hdfs://localhost:9000/user/input hdfs://localhost:9000/user/output
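These two paths become otherArgs[0] and otherArgs[1] in main, so the order matters: input first, then output. Also note that the output directory must not already exist; if it does, the job will fail with an "output directory already exists" error.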
Click the Run button to run the program.
4. After the job completes, view the results
Method 1:
hadoop fs -ls /user/output
You can see two output files: _SUCCESS and part-r-00000.
Run hadoop fs -cat /user/output/*
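Each line of part-r-00000 contains a word and its count, separated by a tab. The actual words and counts depend on the contents of your README.txt; a purely hypothetical excerpt might look like:
hadoop	3
the	5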
Method 2:
Expand DFS Locations and double-click part-r-00000 to open and view the results.
Reference:
http://www.cnblogs.com/kinglau/p/3802705.html (Build a Hadoop 2.7.0 development environment under Eclipse)