Setting up a Hadoop development environment in Eclipse

Source: Internet
Author: User

Steps to set up Hadoop support in Eclipse:

1. Download and install Eclipse.

2. You need the hadoop-eclipse-plugin-2.6.0.jar plugin. You can download the plugin source from https://github.com/winghc/hadoop2x-eclipse-plugin and compile it yourself, or use a ready-made plugin jar.

3. Copy the compiled hadoop-eclipse-plugin-2.6.0.jar into Eclipse's plugins directory, then restart Eclipse.

4. Configure the Hadoop installation directory in Eclipse

Open Window -> Preferences -> Hadoop Map/Reduce and set it to your Hadoop installation directory.

Click Apply, then OK to confirm.

5. Configure the Map/Reduce view

Open Window -> Open Perspective -> Other..., select Map/Reduce, and click OK.

Then open Window -> Show View -> Other..., select Map/Reduce Locations, and click OK.

6. On the Map/Reduce Locations tab, click the elephant icon, or right-click the blank area and select "New Hadoop location...". In the "New Hadoop location..." dialog that pops up, make the appropriate configuration.

Set Location name to any name you like. Host is the IP address or host name of the master node in the Hadoop cluster. The port of the Map/Reduce master must match the port configured in mapred-site.xml (10020 here), and the port of the DFS master must match the port configured in core-site.xml (9000 here). User name is the user that installed the Hadoop cluster (root here). Then click Finish. The location you just created appears under the DFS Locations directory in Eclipse, which means Eclipse has connected to the Hadoop cluster successfully.
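For reference, the DFS master port above corresponds to the fs.defaultFS setting in core-site.xml on the cluster. A minimal sketch, assuming the master node's host name is `master` (the host name is illustrative, not from the original):

```xml
<!-- core-site.xml: the DFS master port (9000) comes from fs.defaultFS -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

The port you enter in the Eclipse dialog must match this value exactly, or the DFS Locations tree will fail to connect.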

7. Open the Project Explorer to view the HDFS file system.

8. Create a new Map/Reduce project

The Hadoop services (HDFS and YARN) need to be started on the cluster first.
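On the master node, the daemons can be started with the standard Hadoop 2.x scripts; a minimal sketch, assuming Hadoop's sbin directory is on the PATH:

```shell
# Start HDFS (NameNode, DataNodes, SecondaryNameNode)
start-dfs.sh
# Start YARN (ResourceManager, NodeManagers)
start-yarn.sh
# List the running Java daemons to verify startup
jps
```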

Select File -> New -> Project -> Map/Reduce Project, then click Next.

Fill in the project name

Write the WordCount class:

package test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: tokenize each input line and emit (word, 1) for every token
    public static class MyMap extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts emitted for each word
    public static class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: WordCount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(MyMap.class);
        job.setReducerClass(MyReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
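The map and reduce steps above can be sketched locally, without a Hadoop cluster, as a plain Java word count over a string. `LocalWordCount` is a hypothetical helper for illustration, not part of the project above; it uses the same StringTokenizer splitting as the mapper:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class LocalWordCount {
    // Map step: tokenize the text; reduce step: sum a count of 1 per token
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        StringTokenizer itr = new StringTokenizer(text); // same tokenizer the mapper uses
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum); // sum counts per word
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello hadoop hello eclipse"));
    }
}
```

Running the real job on a cluster distributes exactly this computation: mappers tokenize splits of the input files in parallel, and reducers receive each word with all of its 1s and sum them.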

To run the WordCount program:

Right-click the project and choose Run As -> Run Configurations...

Choose Java Application -> WordCount (the class to run) -> Arguments

Fill in the input and output paths under Program arguments, then click Run.
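For example, the two program arguments might look like the following (the host name `master` and the paths are illustrative, not from the original; the output directory must not already exist in HDFS or the job will fail):

```
hdfs://master:9000/input hdfs://master:9000/output
```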

