Use of the Eclipse plugin for Hadoop 2.6.2

Reposting is welcome; please credit the source and place a link to the original article in a prominent position on the page.

First, here is the Eclipse plugin: http://download.csdn.net/download/zdfjf/9421244

    • 1. Plug-in installation

After downloading the plugin, put it in the plugins folder of the Eclipse installation directory and restart Eclipse. You will find a new DFS Locations entry in the Project Explorer window, which corresponds to the files stored in HDFS. At this point no directory structure is shown under it; don't worry, it will appear after the configuration in step 2.

It occurred to me that there is a post on cnblogs that describes this part very well, and I doubt I could write it better, so rather than duplicate the effort I will simply refer you to the Xiapi ("Shrimp") Studio original: http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html. Follow it and the configuration will be complete. What I do want to talk about is that, even after the configuration is done, a few problems can still keep the program from running successfully. After much debugging, I am posting the code that ran successfully for me, together with the corresponding configuration.
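Before moving on, it can save time to verify from plain Java code that your machine can reach HDFS at all, independent of the plugin. The following is a minimal sketch of mine, not part of the plugin or the referenced article; the NameNode URI hdfs://192.168.0.1:9000 is an assumed value, so substitute the fs.defaultFS from your own core-site.xml.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSanityCheck {
  public static void main(String[] args) throws Exception {
    // Act as the cluster user, the same trick used in the WordCount code below.
    System.setProperty("HADOOP_USER_NAME", "hadoop");
    Configuration conf = new Configuration();
    // Assumed NameNode URI -- replace with the host:port from your core-site.xml.
    FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.0.1:9000"), conf);
    // List the HDFS root; if this prints paths, connectivity and permissions are fine.
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
    fs.close();
  }
}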

    • 2. Code
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    System.setProperty("HADOOP_USER_NAME", "hadoop");
    Configuration conf = new Configuration();
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.address", "192.168.0.1:8032");
    conf.set("mapreduce.app-submission.cross-platform", "true");
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length < 2) {
      System.err.println("Usage: wordcount <in> [<in>...] <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count1");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    for (int i = 0; i < otherArgs.length - 1; ++i) {
      FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
    }
    FileOutputFormat.setOutputPath(job,
        new Path(otherArgs[otherArgs.length - 1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
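To run this from Eclipse, pass the input and output paths as program arguments (Run As -> Run Configurations -> Arguments tab). The paths below are only a hypothetical example; the NameNode host and port and the directories must match your cluster, and the output directory must not exist yet:

hdfs://192.168.0.1:9000/user/hadoop/input hdfs://192.168.0.1:9000/user/hadoop/output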

About the System.setProperty("HADOOP_USER_NAME", "hadoop") call at the top of main(): my Windows username is frank, while the username on the cluster is hadoop, so this property makes the job submit as the cluster user. The conf.set calls for mapreduce.framework.name and yarn.resourcemanager.address are there because the configuration files were not taking effect; without those two lines the job runs locally instead of being submitted to the cluster. And mapreduce.app-submission.cross-platform is set to true because the submission is cross-platform, from Windows to Linux.
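These conf.set calls are just the programmatic form of settings that mapred-site.xml and yarn-site.xml would normally supply. If you would rather fix the configuration files than hard-code the values, the equivalent entries look roughly like this; this is a sketch assuming the same ResourceManager address as above, so substitute your own:

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>192.168.0.1:8032</value>
</property>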

Then comes the most important step. Attention, attention, attention: important things get said three times.

The plugin is supposed to package the project into a jar automatically and upload it to run. But there is a problem: it does not actually do the packaging. So we export the project as a jar ourselves, then, via Build Path, add that jar to the project as an external dependency, and then right-click and choose Run As -> Run on Hadoop. This should succeed.
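As an aside, a workaround I have seen for this same packaging problem (an alternative to the build-path trick above, not the method described in this article) is to tell the job explicitly which jar to ship, so the submission no longer depends on what the plugin packages. The path here is a hypothetical placeholder for wherever you exported the project jar:

// In main(), after creating the Job object:
Job job = new Job(conf, "word count1");
job.setJarByClass(WordCount.class);
// Name the exported jar explicitly; "E:/export/wordcount.jar" is a placeholder.
job.setJar("E:/export/wordcount.jar");
// Equivalent Configuration property, if you prefer:
// conf.set("mapreduce.job.jar", "E:/export/wordcount.jar");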

PS: This is just one approach that worked for me. During configuration I ran into all kinds of problems, each with a different cause. So search widely, think it through, and solve problems as they come up.
