The local environment is as follows:
Eclipse 3.6
Hadoop-0.20.2
Hive-0.5.0-dev
1. Install the hadoop-0.20.2-eclipse-plugin. Note: the hadoop-0.20.2/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar shipped inside the Hadoop directory has problems under Eclipse 3.6 and jobs submitted with it cannot run on the Hadoop server; a working build can be downloaded from http://code.google.com/p/hadoop-eclipse-plugin/
2. Switch to the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce
3. Add a DFS location: in the Map/Reduce Locations view, click New Hadoop location... and fill in the corresponding Host and Port:
Map/Reduce Master: Host: 10.10.xx.xx, Port: 9001
DFS Master: Host: 10.10.xx.xx (or check "Use M/R Master host"), Port: 9000
User name: root
Then switch to the Advanced parameters tab and change hadoop.job.ugi from its default value of "Drwho,Tardis" to "root,Tardis". If that option is not displayed, restart Eclipse with eclipse -clean; otherwise an org.apache.hadoop.security.AccessControlException may be reported when the job runs.
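For reference, these location settings correspond to Configuration keys of the old 0.20 API. Below is a minimal sketch, assuming the placeholder host 10.10.xx.xx from this setup; the class name LocationConfig is hypothetical, and hadoop.job.ugi is only honoured by pre-security Hadoop versions such as 0.20.2:

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;

// Hypothetical helper showing which Configuration keys the Hadoop location fields map to
public class LocationConfig {
    public static Configuration create() {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");   // DFS Master
        conf.set("mapred.job.tracker", "10.10.xx.xx:9001");       // Map/Reduce Master
        conf.set("hadoop.job.ugi", "root,Tardis");                // Advanced parameters -> hadoop.job.ugi
        return conf;
    }
}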
4. Add an entry for the master to the local machine's hosts file:
10.10.xx.xx ZW-hadoop-master ZW-hadoop-master.

Note that there must be a trailing dot after the second ZW-hadoop-master; otherwise the Map/Reduce job fails with:

java.lang.IllegalArgumentException: Wrong FS: hdfs://ZW-hadoop-master:9000/user/root/oplog/out/_temporary/_partition, expected: hdfs://ZW-hadoop-master:9000
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
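To confirm that the DFS Master settings and the hosts entry resolve correctly before submitting a job, a quick connectivity check along the following lines can help. This is a minimal sketch; the class name HdfsConnectionCheck is hypothetical, and the address and /user/root path are the placeholders used in this setup:

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same value as the DFS Master host/port configured in the Hadoop location
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");
        FileSystem fs = FileSystem.get(conf);
        // If the hosts entry or the filesystem URI is wrong, this call fails early
        for (FileStatus status : fs.listStatus(new Path("/user/root"))) {
            System.out.println(status.getPath());
        }
    }
}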
5. Create a Map/Reduce project and create the Mapper, Reducer, and Driver classes. Note that the automatically generated code is based on the old Hadoop API, so modify it yourself:
// MapperTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Input records are "|"-delimited; the third field is the user id
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}

// ReducerTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for each user id
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

// DriverTest.java
package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }

        // Compress the job output with gzip; set these before the Job is created,
        // because Job takes a copy of the Configuration
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class, CompressionCodec.class);

        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
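MapperTest assumes pipe-delimited log records with the user id in the third field. The actual log layout is not shown in this article, so the sample line below is hypothetical; it only illustrates what the mapper extracts:

package com.sohu.hadoop.test;

// Illustrates the record layout MapperTest expects: "|"-delimited fields, user id third
public class RecordFormatExample {
    public static void main(String[] args) {
        String line = "20110305|/index.html|u12345|200";   // hypothetical log record
        String userid = line.split("[|]")[2];
        System.out.println(userid);   // prints u12345, which becomes the map output key
    }
}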
6. Right-click DriverTest, choose Run As -> Run on Hadoop, and select the corresponding Hadoop location.
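Because the driver turns on gzip output compression, the files in the output directory are compressed part files. A sketch for reading them back from HDFS is shown below; the class name ReadCompressedOutput and the output path are hypothetical and should be adjusted to the <out> argument passed to DriverTest:

package com.sohu.hadoop.test;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class ReadCompressedOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");
        FileSystem fs = FileSystem.get(conf);
        // Hypothetical reducer output file; adjust to the job's <out> directory
        Path part = new Path("/user/root/oplog/out/part-r-00000.gz");
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(part);
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(codec.createInputStream(fs.open(part))));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);   // "userid<TAB>count" records written by ReducerTest
        }
        reader.close();
    }
}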