Installing the Hadoop 2.2 Plugin on Eclipse and Testing Development

Source: Internet
Author: User

I. Prerequisite: a working Hadoop cluster is already set up
II. Development environment: Windows 7 64-bit, Eclipse 3.4.2, Hadoop 2.2
III. Setup
1. Unzip hadoop-2.2.0.tar.gz locally, set the HADOOP_HOME environment variable, and add %HADOOP_HOME%\bin to PATH.
2. Download hadoop-common-2.2.0-bin-master.zip from https://github.com/srccodes/hadoop-common-2.2.0-bin, unzip it, and copy its files into the %HADOOP_HOME%\bin directory (on Windows this supplies winutils.exe and hadoop.dll, which the stock 2.2.0 tarball lacks).
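A minimal sketch of step 1 in shell form. The path /opt/hadoop-2.2.0 is a placeholder for wherever you unpacked the tarball; on Windows you would use `setx HADOOP_HOME C:\hadoop-2.2.0` and edit PATH in the system settings instead.

```shell
# Hypothetical install location - adjust to where hadoop-2.2.0.tar.gz was unpacked.
export HADOOP_HOME=/opt/hadoop-2.2.0
export PATH="$HADOOP_HOME/bin:$PATH"

# On Windows, $HADOOP_HOME/bin must also contain winutils.exe and hadoop.dll
# from the hadoop-common-2.2.0-bin repository; verify the directory exists:
ls "$HADOOP_HOME/bin" 2>/dev/null || echo "warning: $HADOOP_HOME/bin not found"
echo "HADOOP_HOME=$HADOOP_HOME"
```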
3. Open Eclipse and configure the plugin details as follows (the original post's configuration screenshots are not reproduced here).




IV. Developing and testing the project
1. For the test code, WordCount is a good choice.
package test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCountTest {

  /**
   * LongWritable, IntWritable and Text are Hadoop wrapper classes for Java types.
   * They implement WritableComparable, so they can be serialized and compared,
   * which makes data exchange easy in a distributed environment; think of them
   * as replacements for long, int and String. Any class used as a key must
   * implement WritableComparable.
   */
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    /**
     * map() turns a single input k/v pair into intermediate k/v pairs; one input
     * pair may produce zero or more output pairs, and the types need not match.
     *
     * Raw input lines:        c++ java hello / world java hello / you me too
     * As map input, the key is the byte offset and the value is the line.
     * Emitted pairs (word, 1): (c++,1) (java,1) (hello,1) (world,1) (java,1)
     *                          (hello,1) (you,1) (me,1) (too,1)
     * These become the input of the reduce phase.
     */
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      System.out.println("value: " + value.toString());
      System.out.println("key: " + key.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    /**
     * The reduce phase receives grouped data such as (java, [1, 1]) and
     * (hello, [1, 1]), sums the counts for each key, and stores results like:
     * c++ 1 / hello 2 / java 2 / me 1 / too 1 / world 1 / you 1
     */
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // Fill in the input directory to analyze and the output directory for
    // results according to your own cluster.
    args = new String[2];
    args[0] = "hdfs://192.168.13.33:9000/in";
    args[1] = "hdfs://192.168.13.33:9000/out5";

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://master.hadoop:9000");
    conf.set("hadoop.job.user", "root");
    // conf.set("mapreduce.framework.name", "yarn");
    // Old-style (pre-2.x) setting; checking the official configuration docs
    // shows the 2.x name is mapreduce.jobtracker.address:
    // conf.set("mapred.job.tracker", "192.168.1.187:9001");
    // conf.set("mapreduce.jobtracker.address", "192.168.13.33:9001");
    conf.set("yarn.resourcemanager.hostname", "master.hadoop");
    // conf.set("yarn.resourcemanager.admin.address", "192.168.13.33:8033");
    // conf.set("yarn.resourcemanager.address", "192.168.13.33:8032");
    // conf.set("yarn.resourcemanager.resource-tracker.address", "192.168.13.33:8031");
    // conf.set("yarn.resourcemanager.scheduler.address", "192.168.13.33:8030");

    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    for (String s : otherArgs) {
      System.out.println(s);
    }
    // The remaining parameters are the HDFS input and output paths.
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }

    Job job = new Job(conf, "word count");          // Job(Configuration conf, String jobName)
    job.setJarByClass(WordCountTest.class);
    job.setMapperClass(TokenizerMapper.class);      // set the Mapper class for the job
    job.setCombinerClass(IntSumReducer.class);      // set the Combiner class for the job
    job.setReducerClass(IntSumReducer.class);       // set the Reducer class for the job
    job.setOutputKeyClass(Text.class);              // set the type of the output key
    job.setOutputValueClass(IntWritable.class);     // set the type of the output value
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));    // input path
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));  // output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
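Before running on the cluster, the map/reduce logic above can be sanity-checked locally. This is a sketch, not part of the original post: `LocalWordCount` is a hypothetical helper that mimics the map phase (tokenize, emit (word, 1)) and the reduce phase (sum per key) in plain Java, using the sample lines from the comments.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

/** Local sketch of what the WordCount job computes, without a cluster. */
public class LocalWordCount {

  public static Map<String, Integer> count(String... lines) {
    Map<String, Integer> counts = new TreeMap<>(); // sorted keys, like reducer output
    for (String line : lines) {                    // "map": one call per input line
      StringTokenizer itr = new StringTokenizer(line);
      while (itr.hasMoreTokens()) {
        // "reduce": sum the emitted 1s per word
        counts.merge(itr.nextToken(), 1, Integer::sum);
      }
    }
    return counts;
  }

  public static void main(String[] args) {
    // Same sample input as in the mapper's comments.
    System.out.println(count("c++ java hello", "world java hello", "you me too"));
    // {c++=1, hello=2, java=2, me=1, too=1, world=1, you=1}
  }
}
```

If the local counts match what the job later writes to HDFS, the Mapper and Reducer logic is correct and any remaining problems are configuration-related.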


2. Right-click the project and select Run on Hadoop.
3. The console will print the job log; then view the output directory and its files through HDFS.
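Step 3 can be done from the command line as well. These commands require a running cluster; the /out5 path matches the output directory hard-coded in the job above.

```shell
# List the job's output directory; a _SUCCESS marker means the job completed.
hdfs dfs -ls /out5
# Print the reducer output (word<TAB>count per line).
hdfs dfs -cat /out5/part-r-00000
```

Note that the job will fail if /out5 already exists; remove it with `hdfs dfs -rm -r /out5` before re-running.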
References:
- hadoop2.2 Learning 3: Installing the Hadoop plugin on Eclipse - http://blog.163.com/gibby_l/blog/static/8300316120140180555754/
- Developing a hadoop2.x Map/Reduce project in Eclipse - http://www.micmiu.com/bigdata/hadoop/hadoop2x-eclipse-mapreduce-demo/
- Configuring the Hadoop development environment (Eclipse) - http://blog.csdn.net/zythy/article/details/17397153
- Hadoop Learning 30: Win7 Eclipse debugging CentOS Hadoop2.2 MapReduce - http://www.tuicool.com/articles/ajuzrq
- hadoop-common-2.2.0-bin - https://github.com/srccodes/hadoop-common-2.2.0-bin/tree/master/bin
