Original address: http://www.linuxidc.com/Linux/2014-11/109200.htm
An illustrated walkthrough of configuring Eclipse 4.4.0 on Windows 8 as a development environment for Hadoop 2.2.0 running on CentOS 6.5, for readers who need it.
Eclipse's Hadoop plugin: https://github.com/winghc/hadoop2x-eclipse-plugin
Unzip the downloaded package, copy hadoop-eclipse-kepler-plugin-2.2.0.jar into Eclipse's dropins directory, and restart Eclipse.
Open Window -> Preferences and configure the Hadoop root directory. The Hadoop installation directory here is not a Hadoop installation on Windows; it is the path on Windows where you unpacked the Hadoop source you compiled on CentOS. This path is only used so that the jars needed when creating a MapReduce project can be pulled in automatically from that location.
Open Window -> Open Perspective -> Other -> Map/Reduce to switch to the Map/Reduce perspective.
Find the Map/Reduce Locations view, right-click inside it and select New Hadoop location; the location configuration dialog will then appear.
The port in Map/Reduce (V2) Master corresponds to the port configured in mapred-site.xml, and the host and port in DFS Master correspond to the configuration in core-site.xml. Once both match your cluster, the configuration is complete.
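For reference, here is a sketch of the matching cluster-side entries, assuming the NameNode runs on a host named master with common 2.2-era ports; the hostname, ports, and the mapreduce.jobhistory.address property are illustrative assumptions, so adjust them to whatever your own cluster files actually declare:

```xml
<!-- core-site.xml on the CentOS cluster: the DFS Master host/port
     in the plugin must match this value -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>

<!-- mapred-site.xml: the Map/Reduce (V2) Master port in the plugin
     should match the port declared here (example property only) -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
```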
To test it, create a new MapReduce project and try running it; on Windows this typically fails at first with the well-known missing-winutils error.
To solve this problem, complete the following steps: configure HADOOP_HOME on Windows, add %HADOOP_HOME%\bin to Path, then download hadoop-common-2.2.0-bin from https://github.com/srccodes/hadoop-common-2.2.0-bin. After downloading, copy the contents of its bin directory into your own Windows Hadoop bin directory, overwriting existing files, and also copy hadoop.dll into System32 on the C drive. If after all this you still hit: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z, then check your JDK; this can be caused by a 32-bit JDK, in which case download and install a 64-bit JDK and configure the JRE environment in Eclipse to be your newly installed 64-bit JRE.
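As a quick sanity check that winutils.exe really ended up where Hadoop will look for it, a small probe like the following can help; hasWinutils is a hypothetical helper written for this article, and the E:\hadoop2.2 layout is just the example path used later in the code:

```java
import java.io.File;

public class WinutilsCheck {

    // Returns true if winutils.exe exists under <homeDir>\bin,
    // which is where Hadoop's Shell class looks for it on Windows.
    static boolean hasWinutils(String homeDir) {
        return new File(new File(homeDir, "bin"), "winutils.exe").isFile();
    }

    public static void main(String[] args) {
        // Prefer an explicit argument, otherwise fall back to HADOOP_HOME.
        String home = args.length > 0 ? args[0] : System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + home);
        System.out.println("winutils.exe present: "
                + (home != null && hasWinutils(home)));
    }
}
```

Running it with your Hadoop directory as the argument (for example `java WinutilsCheck E:\hadoop2.2`) should print `winutils.exe present: true` once the files from the download above are in place.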
For example, my jre1.8 is 64-bit and my jre7 is 32-bit. If your 64-bit JRE is not listed, you can add it directly under Installed JREs, then select the 64-bit JRE environment.
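To confirm which bitness the JVM actually has, a small check like this can be run from Eclipse; is64Bit is a hypothetical helper, and sun.arch.data.model is a JVM-specific property that Oracle/OpenJDK JVMs expose, so the os.arch fallback covers JVMs that omit it:

```java
public class JreBitnessCheck {

    // Decides 64-bitness from sun.arch.data.model ("32"/"64") when
    // available, otherwise falls back to inspecting os.arch.
    static boolean is64Bit(String dataModel, String osArch) {
        if ("64".equals(dataModel)) {
            return true;
        }
        return dataModel == null && osArch != null && osArch.contains("64");
    }

    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model");
        String arch = System.getProperty("os.arch");
        System.out.println("data model: " + model + ", os.arch: " + arch);
        System.out.println("64-bit JVM: " + is64Bit(model, arch));
    }
}
```

If this prints `64-bit JVM: false` for the JRE your project uses, switch the project to the 64-bit JRE as described above.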
Then write a WordCount program to test. My code is posted below; it assumes you have already created the input directory in HDFS and put some content into it.
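If the input directory does not exist yet, it can be created from the CentOS side with the hdfs shell; these commands assume the hdfs://master:9000 setup used in the code below and a sample file name chosen here for illustration:

```shell
# Run on the cluster; adjust paths and hostnames to your setup.
hdfs dfs -mkdir -p /input
echo "hello hadoop hello world" > /tmp/words.txt
hdfs dfs -put /tmp/words.txt /input/
hdfs dfs -cat /input/words.txt
```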
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // Split each input line into tokens and emit (word, 1) pairs.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            // Sum the counts for each word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        // Point Hadoop at the Windows directory containing bin\winutils.exe.
        System.setProperty("hadoop.home.dir", "e:\\hadoop2.2\\");
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output"));
        boolean flag = job.waitForCompletion(true);
        System.out.println("succeed! " + flag);
        System.exit(flag ? 0 : 1);
    }
}