The local environment is as follows:
Eclipse 3.6
Hadoop-0.20.2
Hive-0.5.0-dev
1. Install the hadoop-0.20.2-eclipse-plugin. Note: the plugin bundled with Hadoop at /hadoop-0.20.2/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar has problems under Eclipse 3.6 and cannot run jobs on the Hadoop server; download a working build from http://code.google.com/p/hadoop-eclipse-plugin/ instead.
2. Switch to the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce
3. Add a DFS Location: click Map/Reduce Locations -> New Hadoop Location and fill in the corresponding host and port:
Map/Reduce Master:
    Host: 10.10.xx.xx
    Port: 9001
DFS Master:
    Host: 10.10.xx.xx (or simply check "Use M/R Master host")
    Port: 9000
User name: root

In Advanced parameters, change hadoop.job.ugi from the default DrWho,Tardis to root,Tardis. If the option is not visible, restart Eclipse with eclipse -clean; otherwise jobs may fail with org.apache.hadoop.security.AccessControlException.
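To sanity-check these connection settings outside the plugin, a minimal sketch is shown below. It is not part of the original setup; the class name ConnectionCheck is hypothetical, and it assumes the host/port values and the old 0.20.2 property names (fs.default.name, mapred.job.tracker, hadoop.job.ugi) used above.

package com.sohu.hadoop.test;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: connects with the same values entered in the Hadoop
// Location dialog and lists /user/root to confirm that the DFS Master and
// hadoop.job.ugi settings are accepted.
public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");   // DFS Master
        conf.set("mapred.job.tracker", "10.10.xx.xx:9001");       // Map/Reduce Master
        conf.set("hadoop.job.ugi", "root,Tardis");                // same as the Advanced parameter

        FileSystem fs = FileSystem.get(URI.create("hdfs://10.10.xx.xx:9000"), conf);
        for (FileStatus status : fs.listStatus(new Path("/user/root"))) {
            System.out.println(status.getPath());
        }
    }
}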
4. Configure the local hosts file:
10.10.xx.xx    zw-hadoop-master. zw-hadoop-master
# Note the extra entry zw-hadoop-master. (with a trailing dot); without it, running a Map/Reduce job fails with:
# java.lang.IllegalArgumentException: Wrong FS: hdfs://zw-hadoop-master:9000/user/root/oplog/out/_temporary/_attempt_201008051742_0135_m_000007_0, expected: hdfs://zw-hadoop-master.:9000
#     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
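A quick way to see which authority the client actually binds to (and therefore whether the trailing-dot hostname will trigger the Wrong FS mismatch) is the sketch below. It is not from the original post; the class name HostCheck and the probed path are illustrative only.

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical check: prints the URI the FileSystem resolves to and probes a
// path, so a "Wrong FS: ... expected: hdfs://zw-hadoop-master.:9000" mismatch
// shows up before a full job run.
public class HostCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://zw-hadoop-master.:9000");

        FileSystem fs = FileSystem.get(conf);
        System.out.println("FileSystem URI: " + fs.getUri());
        System.out.println("/user/root exists: " + fs.exists(new Path("/user/root")));
    }
}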
5. Create a Map/Reduce Project and add the Mapper, Reducer, and Driver classes. Note that the auto-generated code targets the old Hadoop API, so rewrite it by hand:
// MapperTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    // Extracts the third pipe-delimited field (the user id) and emits (userid, 1).
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}

// ReducerTest.java
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    // Sums the counts emitted for each user id.
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

// DriverTest.java
package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }

        // Compress the job output with gzip. Set these before constructing the
        // Job, because Job copies the Configuration at construction time.
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class, CompressionCodec.class);

        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
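The mapper assumes pipe-delimited input lines whose third field is the user id. A tiny standalone sketch of just that parsing step can confirm the "[|]" split behaves as expected before running the full job; the sample line below is made up for illustration.

// Standalone sketch of the mapper's parsing step; only the "[|]" split and
// the field index match the MapperTest code above.
public class SplitCheck {
    public static void main(String[] args) {
        String line = "2010-08-05|click|user_123|/index.html";
        String userid = line.split("[|]")[2];
        System.out.println(userid);   // prints: user_123
    }
}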
6. On DriverTest, click Run As -> Run on Hadoop and select the corresponding Hadoop Location.