Hadoop Study Notes 0004 -- Installing the Hadoop Plugin in Eclipse

Source: Internet
Author: User
Tags: hadoop, fs



1. Download hadoop-1.2.1.tar.gz and unzip it on Windows 7 into a local hadoop-1.2.1 directory;

2. If hadoop-1.2.1 does not include the hadoop-eclipse-plugin-1.2.1.jar package, download it from the Internet;

3. Close Eclipse and copy the jar into the eclipse-x.x\plugins directory under the Eclipse installation directory;

4. In Eclipse, open Window -> Preferences and, on the Hadoop Map/Reduce preference page, set the path to the unpacked hadoop-1.2.1 directory;


5. Open the Map/Reduce Locations view;



6. Set the Map/Reduce Location parameters (see the example below).
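(The parameter screenshot is missing here; as an example, assuming the cluster addresses used later in this note: Location name: hadoop; Map/Reduce Master: Host 192.168.0.134, Port 9001; DFS Master: Host 192.168.0.134, Port 9000; User name: root. The DFS Master host/port must match fs.default.name in core-site.xml, and the Map/Reduce Master host/port must match mapred.job.tracker in mapred-site.xml; port 9001 here is only the common Hadoop 1.x convention.)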



Click the "Finish" button to close the window.

7. In the left pane, expand DFS Locations -> hadoop (the location name from the previous step); if you can see the user directory, the plugin was installed successfully.



Note: if the DFS Locations item does not appear, create a new Map/Reduce Project first;

8. Testing

(1) Create an input directory on HDFS:

hadoop fs -mkdir input
(2) Copy the local README.txt into the HDFS input directory:
hadoop fs -put /usr/hadoop/README.txt input
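You can verify the upload with hadoop fs -ls input, which should list README.txt.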

(3) Create a new WordCount project.

Choose File -> New -> Project, select Map/Reduce Project, and enter WordCount as the project name.

Create a new class named WordCount in the WordCount project; the code is as follows:

package com.hadoop.test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
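Note that IntSumReducer is set as both the combiner and the reducer; this is safe for word counting because summing partial counts on the map side yields the same final totals.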

(4) Right-click WordCount.java and choose Run As -> Run Configurations, then set the program arguments to the input and output folders: hdfs://192.168.0.134:9000/user/root/input hdfs://192.168.0.134:9000/user/root/output


Click the Run button to run the program.

Expand DFS Locations and double-click the part-r-00000 file under the output directory to view the results.
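Each reducer writes one part-r-NNNNN file; with the default single reducer, the full result is in part-r-00000.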


Appendix: the following error occurred during testing (on Windows, Hadoop 1.x fails when it tries to set POSIX-style permissions on the local staging directory):

Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator-519341271\.staging to 0700

Solutions:

Method 1: replace the hadoop-core-1.2.1.jar file

Download hadoop-core-1.2.1-modified.jar and replace the hadoop-core-1.2.1.jar file under the Hadoop installation directory with it: http://download.csdn.net/detail/m_star_jy_sy/7376283

Method 2: modify org.apache.hadoop.fs.FileUtil and recompile

The steps are as follows:

1. Create a new Java project in Eclipse;

2. Import the Hadoop-related jar packages into the project;

3. Copy the src/core/org/apache/hadoop/fs/FileUtil.java file from the Hadoop source and paste it into the src directory of the Eclipse project;

4. Locate the checkReturnValue method and comment out its body (see the sketch below);
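The code screenshot is missing here; as a sketch based on the Hadoop 1.x source (its message matches the exception above), the change is to comment out the body of checkReturnValue in FileUtil.java:

private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission)
                                     throws IOException {
    // Body commented out so that a failed chmod on Windows is ignored:
    // if (!rv) {
    //     throw new IOException("Failed to set permissions of path: " + p +
    //             " to " + String.format("%04o", permission.toShort()));
    // }
}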

5. In the project's output directory, find the compiled class files; there will be two class files, because FileUtil.java contains inner classes;


6. Add the class files to the corresponding directory inside hadoop-core-1.2.1.jar, overwriting the original files (an example command follows);
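For example, from a shell in the project's output directory (a sketch; the wildcard assumes a Unix-style shell and picks up FileUtil's inner-class files as well):

jar uf /path/to/hadoop-core-1.2.1.jar org/apache/hadoop/fs/FileUtil*.class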


7. Copy the updated hadoop-core-1.2.1.jar to the Hadoop cluster, overwriting the original file, and restart the Hadoop cluster;

8. Add the updated hadoop-core-1.2.1.jar to the project;

9. Run the program again; it should now succeed.




