Using the MapReduce Eclipse Plugin


The local environment is as follows:

Eclipse 3.6

Hadoop-0.20.2

Hive-0.5.0-dev

1. Install the hadoop-0.20.2-eclipse-plugin plugin. Note: the hadoop-0.20.2-eclipse-plugin.jar shipped with Hadoop under /hadoop-0.20.2/contrib/eclipse-plugin/ has problems under Eclipse 3.6 and cannot run jobs on the Hadoop server; a working build can be downloaded from http://code.google.com/p/hadoop-eclipse-plugin/

2. Switch to the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce

3. Add a DFS Location: click Map/Reduce Locations -> New Hadoop Location, then fill in the corresponding host and port:

Map/Reduce Master:
    Host: 10.10.xx.xx
    Port: 9001

DFS Master:
    Host: 10.10.xx.xx (or simply check "Use M/R Master host")
    Port: 9000

User name: root

Under Advanced parameters, change hadoop.job.ugi from its default of DrWho,Tardis to root,Tardis. If the option does not appear, restart Eclipse with eclipse -clean; otherwise jobs may fail with org.apache.hadoop.security.AccessControlException.
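For reference, these location fields map onto the standard 0.20-era client configuration keys. Below is a minimal sketch of setting the same values programmatically; the host is the same placeholder as above, and the class name LocationConfigDemo is made up for illustration:

// Hypothetical client-side equivalent of the Eclipse location settings.
import org.apache.hadoop.conf.Configuration;

public class LocationConfigDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");  // DFS Master
        conf.set("mapred.job.tracker", "10.10.xx.xx:9001");      // Map/Reduce Master
        conf.set("hadoop.job.ugi", "root,Tardis");               // user,group list
        System.out.println(conf.get("fs.default.name"));
    }
}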

4. Configure the local hosts file:

10.10.xx.xx    zw-hadoop-master. zw-hadoop-master

Note the extra zw-hadoop-master. entry (with a trailing dot); without it, running a Map/Reduce job fails with:

java.lang.IllegalArgumentException: Wrong FS: hdfs://zw-hadoop-master:9000/user/root/oplog/out/_temporary/_attempt_201008051742_0135_m_000007_0, expected: hdfs://zw-hadoop-master.:9000
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
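The Wrong FS error arises because FileSystem.checkPath compares the authority of each path against the client's default filesystem URI, and "zw-hadoop-master" and "zw-hadoop-master." are different authorities. A quick way to check which URI the client actually resolves (a minimal sketch, assuming the host name from above; the class name FsUriCheck is made up):

// Diagnostic sketch: print the URI the HDFS client resolves.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsUriCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://zw-hadoop-master.:9000");
        FileSystem fs = FileSystem.get(conf);
        // Should print hdfs://zw-hadoop-master.:9000, matching the
        // "expected:" side of the exception message.
        System.out.println(fs.getUri());
    }
}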

5. Create a Map/Reduce Project, then create the Mapper, Reducer, and Driver classes. Note that the auto-generated code targets the old Hadoop API, so it has to be modified by hand:

package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Input lines are pipe-delimited; the third field is the user id.
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}

package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }

        // Set compression before constructing the Job: Job copies the
        // Configuration, so changes made to conf afterwards are ignored.
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class,
                CompressionCodec.class);

        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
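To make the mapper's parsing step concrete, here is a standalone sketch of what it expects from its input. The sample line and its field layout are made up for illustration; all that matters is that fields are separated by | and the third field is the user id:

// Standalone demo of MapperTest's parsing logic; the sample line is hypothetical.
public class SplitDemo {
    public static void main(String[] args) {
        String line = "20100805|GET /index.html|user42|200";
        // "[|]" escapes '|', which is a regex metacharacter, so the
        // split happens on the literal pipe character.
        String userid = line.split("[|]")[2];
        System.out.println(userid); // prints: user42
    }
}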

6. Right-click DriverTest, choose Run As -> Run on Hadoop, and select the corresponding Hadoop Location.
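Because the driver enables gzip output compression, the result files land in HDFS as part-r-*.gz. A minimal sketch for reading one back through the codec (the output path and file name are assumptions for illustration):

// Hypothetical reader for the gzip-compressed job output.
package com.sohu.hadoop.test;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class OutputReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Placeholder path: substitute the <out> directory passed to DriverTest.
        Path part = new Path("/user/root/out/part-r-00000.gz");
        GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        BufferedReader in = new BufferedReader(new InputStreamReader(
                codec.createInputStream(fs.open(part))));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // one "userid<TAB>count" pair per line
        }
        in.close();
    }
}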
