[Hadoop] Eclipse-based Hadoop application development environment configuration

Source: Internet
Author: User

  1. Install Eclipse
    Download Eclipse and unzip it to install. I installed it under the /usr/local/software/ directory.

  2. Installing the Hadoop plugin on eclipse

    Download the Hadoop Eclipse plugin and copy it into the eclipse/plugins directory.

  3. Restart Eclipse and configure the Hadoop installation directory

    If the plugin installed successfully, open Window -> Preferences and you will find a Hadoop Map/Reduce option, in which you need to set the Hadoop installation directory. Exit after the configuration is complete.

  4. Configure Map/Reduce Locations

    Open the Map/Reduce Locations view via Window -> Show View.

    Create a new Hadoop location in the Map/Reduce Locations view: right-click -> New Hadoop location. In the popup dialog you need to configure the Location name (for example, Hadoop1.0) as well as the Map/Reduce Master and DFS Master. The host and port are the address and port you configured in mapred-site.xml and core-site.xml, respectively. For example:
    Map/Reduce Master

    Host: 192.168.239.130  Port: 9001

    DFS Master

    Host: 192.168.239.130  Port: 9000
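The two masters above mirror the daemon addresses in the cluster's own configuration files. As a sketch, assuming a Hadoop 1.x setup (the property names differ under Hadoop 2.x/YARN), core-site.xml and mapred-site.xml would contain entries like:

```xml
<!-- core-site.xml: the NameNode address that the DFS Master points at -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.239.130:9000</value>
  </property>
</configuration>

<!-- mapred-site.xml: the JobTracker address that the Map/Reduce Master points at -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.239.130:9001</value>
  </property>
</configuration>
```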

    Exit after the configuration is complete. Expand DFS Locations -> hadoop: if the HDFS folders are shown, the configuration is correct; if "Connection refused" is displayed, check your configuration.

  5. New WordCount Project

    File -> New -> Project, select Map/Reduce Project, enter the project name WordCount, and so on.
    Create a new class in the WordCount project named WordCount with the following code:

    package WordCount;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class WordCount extends Configured implements Tool {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public int run(String[] args) throws Exception {
            Configuration conf = new Configuration();
            String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: wordcount <in> <out>");
                System.exit(2);
            }
            Job job = new Job(conf, "Word Count");              // job name
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);          // mapper
            job.setCombinerClass(IntSumReducer.class);          // combiner
            job.setReducerClass(IntSumReducer.class);           // reducer
            job.setOutputKeyClass(Text.class);                  // output key type
            job.setOutputValueClass(IntWritable.class);         // output value type
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));      // input path
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));    // output path
            job.waitForCompletion(true);
            return job.isSuccessful() ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            int res = ToolRunner.run(new Configuration(), new WordCount(), args);
            System.exit(res);
        }
    }
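To see what the mapper and reducer compute without a running cluster, here is a minimal plain-Java simulation (no Hadoop dependencies; the class name WordCountLocal is made up for illustration): the mapper emits (word, 1) for every whitespace-separated token, and the reducer sums the counts per word.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Plain-JVM simulation of the WordCount job's logic: tokenize like the
// mapper, then sum the per-word counts like the reducer.
public class WordCountLocal {
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        StringTokenizer itr = new StringTokenizer(text);  // same tokenizer the mapper uses
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);  // the reducer's sum step
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello world hello hadoop"));
    }
}
```

The real job, once exported as a jar, would typically be launched with something like `hadoop jar WordCount.jar WordCount <in> <out>` (jar name here is hypothetical), or directly from Eclipse via the plugin's Run on Hadoop action.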
