Hadoop Eclipse plug-in generation



After a year of Hadoop development, I had never generated the Eclipse plug-in myself; I had always downloaded copies other people built. With some free time today, I decided to fill this gap and generate it on my own. Without further ado, let's begin.

This article covers generating the Eclipse plug-in, the configuration process, resolving common errors, and using the plug-in to run the WordCount sample.

I. Environment description

The Hadoop Eclipse plug-in in this article is generated through Eclipse rather than from the command line, because command-line builds run into problems that are hard to inspect and fix, while Eclipse is very intuitive and convenient for correcting errors. The Hadoop version is 1.2.1 (the stable release), the operating system is CentOS, and the IDE is MyEclipse 2013 (plain Eclipse works the same way).

In the following, $HADOOP_HOME denotes the Hadoop installation directory.


II. Plug-in generation

1. Import $HADOOP_HOME/src/contrib/eclipse-plugin into Eclipse as a project.

In this example, the /opt/hadoop-1.2.1/src/contrib/eclipse-plugin directory is imported into Eclipse.


After the import completes, the project is named "MapReduceTools".



2. Add hadoop-core-1.2.1.jar to the build path

Right-click the project → Build Path → Configure Build Path, remove the existing hadoop-core jar entry (the default entry, which no longer resolves on the classpath), then add $HADOOP_HOME/hadoop-core-1.2.1.jar to the classpath.



3. Modify the configuration files:

(1) Modify build.properties:

Add the Eclipse installation directory and the Hadoop version, for example:

eclipse.home = <your Eclipse installation directory>
version = <your Hadoop version>

My changes to the file are as follows:

#add by Jack Zhu
eclipse.home = /opt/myeclipse-2013
version = 1.2.1
#add by Jack Zhu

output.. = bin/
bin.includes = META-INF/,\
               plugin.xml,\
               resources/,\
               classes/,\
               lib/

(2) Modify build.xml:

This file needs changes in three places, shown below. Each change sits between a pair of <!-- add by zhu --> comments:

<project default="jar" name="eclipse-plugin">

  <import file="../build-contrib.xml"/>

  <path id="eclipse-sdk-jars">
    <fileset dir="${eclipse.home}/plugins/">
      <include name="org.eclipse.ui*.jar"/>
      <include name="org.eclipse.jdt*.jar"/>
      <include name="org.eclipse.core*.jar"/>
      <include name="org.eclipse.equinox*.jar"/>
      <include name="org.eclipse.debug*.jar"/>
      <include name="org.eclipse.osgi*.jar"/>
      <include name="org.eclipse.swt*.jar"/>
      <include name="org.eclipse.jface*.jar"/>
      <include name="org.eclipse.team.cvs.ssh2*.jar"/>
      <include name="com.jcraft.jsch*.jar"/>
    </fileset>
  </path>

  <!-- add by zhu (1) -->
  <path id="hadoop-lib-jars">
    <fileset dir="${hadoop.root}/">
      <include name="hadoop-*.jar"/>
    </fileset>
  </path>
  <!-- add by zhu -->

  <!-- Override classpath to include Eclipse SDK jars -->
  <path id="classpath">
    <pathelement location="${build.classes}"/>
    <pathelement location="${hadoop.root}/build/classes"/>
    <path refid="eclipse-sdk-jars"/>
    <!-- add by zhu (2) -->
    <path refid="hadoop-lib-jars"/>
    <!-- add by zhu -->
  </path>

  <!-- Skip building if eclipse.home is unset. -->
  <target name="check-contrib" unless="eclipse.home">
    <property name="skip.contrib" value="yes"/>
    <echo message="eclipse.home unset: skipping eclipse plugin"/>
  </target>

  <target name="compile" depends="init, ivy-retrieve-common" unless="skip.contrib">
    <echo message="contrib: ${name}"/>
    <javac
        encoding="${build.encoding}"
        srcdir="${src.dir}"
        includes="**/*.java"
        destdir="${build.classes}"
        debug="${javac.debug}"
        deprecation="${javac.deprecation}">
      <classpath refid="classpath"/>
    </javac>
  </target>

  <!-- Override jar target to specify manifest -->
  <target name="jar" depends="compile" unless="skip.contrib">
    <mkdir dir="${build.dir}/lib"/>
    <!-- add by zhu (3) -->
    <copy file="${hadoop.root}/hadoop-core-${version}.jar" tofile="${build.dir}/lib/hadoop-core.jar" verbose="true"/>
    <copy file="${hadoop.root}/lib/commons-cli-1.2.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.root}/lib/commons-lang-2.4.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.root}/lib/commons-configuration-1.6.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.root}/lib/jackson-mapper-asl-1.8.8.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.root}/lib/jackson-core-asl-1.8.8.jar" todir="${build.dir}/lib" verbose="true"/>
    <copy file="${hadoop.root}/lib/commons-httpclient-3.0.1.jar" todir="${build.dir}/lib" verbose="true"/>
    <!-- add by zhu -->
    <jar
        jarfile="${build.dir}/hadoop-${name}-${version}.jar"
        manifest="${root}/META-INF/MANIFEST.MF">
      <fileset dir="${build.dir}" includes="classes/ lib/"/>
      <fileset dir="${root}" includes="resources/ plugin.xml"/>
    </jar>
  </target>

</project>
Note: the file contents above can be copied over directly without any changes.

(3) Modify the META-INF/MANIFEST.MF file:

Add the jar packages copied in change (3) of step (2) above to the Bundle-ClassPath entry in this file: lib/hadoop-core.jar, lib/commons-configuration-1.6.jar, lib/commons-httpclient-3.0.1.jar, lib/commons-lang-2.4.jar, lib/jackson-core-asl-1.8.8.jar, lib/jackson-mapper-asl-1.8.8.jar

The contents of the file are as follows:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: MapReduce Tools for Eclipse
Bundle-SymbolicName: org.apache.hadoop.eclipse;singleton:=true
Bundle-Version: 0.18
Bundle-Activator: org.apache.hadoop.eclipse.Activator
Bundle-Localization: plugin
Require-Bundle: org.eclipse.ui,
 org.eclipse.core.runtime,
 org.eclipse.jdt.launching,
 org.eclipse.debug.core,
 org.eclipse.jdt,
 org.eclipse.jdt.core,
 org.eclipse.core.resources,
 org.eclipse.ui.ide,
 org.eclipse.jdt.ui,
 org.eclipse.debug.ui,
 org.eclipse.jdt.debug.ui,
 org.eclipse.core.expressions,
 org.eclipse.ui.cheatsheets,
 org.eclipse.ui.console,
 org.eclipse.ui.navigator,
 org.eclipse.core.filesystem,
 org.apache.commons.logging
Eclipse-LazyStart: true
Bundle-ClassPath: classes/,
 lib/hadoop-core.jar,
 lib/commons-configuration-1.6.jar,
 lib/commons-httpclient-3.0.1.jar,
 lib/commons-lang-2.4.jar,
 lib/jackson-core-asl-1.8.8.jar,
 lib/jackson-mapper-asl-1.8.8.jar
Bundle-Vendor: Apache Hadoop
Note: the file contents above can be copied over directly without any changes.



4. Run Ant on build.xml to compile and package:

In build.xml, right-click → Run As → Ant Build.


If everything goes well, after the Ant build completes, hadoop-eclipse-plugin-1.2.1.jar will be generated under the $HADOOP_HOME/build/contrib/eclipse-plugin folder.
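For reference, the same build can also be run from the command line (a sketch; the paths here match this article's installation and should be adjusted to your own, and eclipse.home/version can be passed as -D properties instead of editing build.properties):

```shell
cd /opt/hadoop-1.2.1/src/contrib/eclipse-plugin

# Override build.properties values on the command line if desired
ant jar -Declipse.home=/opt/myeclipse-2013 -Dversion=1.2.1
```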



5. Copy the newly generated jar into the dropins folder under Eclipse's installation directory, then restart Eclipse.


III. Eclipse plug-in configuration

Make sure you have placed the plug-in jar into Eclipse's dropins (or plugins) folder and restarted Eclipse.

1. Configure the Eclipse plugin

After starting Eclipse, set it up as follows: in the menu bar, click Window → Show View → Other..., then in the dialog select MapReduce Tools → Map/Reduce Locations. A configuration dialog will pop up; fill in the fields as prompted.
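The host and port values entered in the Map/Reduce Locations dialog must match your cluster's configuration. As a reference (the hostname master is an assumption; substitute your own NameNode/JobTracker host), the relevant Hadoop 1.x entries look like:

```xml
<!-- core-site.xml: matches the "DFS Master" host/port in the dialog -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

<!-- mapred-site.xml: matches the "Map/Reduce Master" host/port in the dialog -->
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
```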


After the above steps, return to the main interface. If you can browse the contents of the distributed file system in the Project Explorer view, the Eclipse plug-in was installed successfully.




IV. Common error resolution

1. Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException

This is caused by a version mismatch between the Hadoop Eclipse plug-in and the Hadoop cluster, or by a broken plug-in. The workaround is to build the Eclipse plug-in yourself, following the steps in this article.

2. java.net.NoRouteToHostException: No route to host

This happens because connections to the port (9000) are refused; see the workaround here:

http://blog.csdn.net/zhu_xun/article/details/41115389
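For reference, a common cause on CentOS of this era is the firewall blocking the NameNode port. A sketch of the usual checks (assuming iptables and root access; the hostname master is an example):

```shell
# From the client machine: check whether the NameNode port is reachable at all
telnet master 9000

# On the cluster node: allow inbound connections on port 9000, then persist the rule
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
service iptables save
```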


V. Creating a Map/Reduce project

Create a new MapReduce project: in the menu bar, click File → New → Other... → MapReduce Project. In the dialog that pops up, fill in the project name and the Hadoop installation directory.




VI. Testing a MapReduce job

1. Prepare the data: create a folder /test/input on HDFS and upload a few text files to it.
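For reference, the folder can also be created and populated from the command line. The files uploaded here (Hadoop's own conf XML files) are just an example; any text files will do:

```shell
hadoop fs -mkdir /test/input
hadoop fs -put /opt/hadoop-1.2.1/conf/*.xml /test/input
hadoop fs -ls /test/input
```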

2. Prepare the MapReduce job: copy WordCount.java from src/examples/org/apache/hadoop/examples in the Hadoop source tree directly into the new MapReduce project.

3. Configure the input/output paths: right-click WordCount.java and click Run As → Run Configurations... in the popup menu; the Run Configurations dialog appears.

Double-click the "Java Application" option, enter the job's input and output paths in the arguments field of the new dialog (separated by a space), and click "Apply" to save.
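The arguments field then contains the two paths, for example (the hostname master is an example; use the DFS master host and port configured for your cluster, and note that the output folder must not already exist):

```
hdfs://master:9000/test/input hdfs://master:9000/test/output
```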

4. Run the job: right-click WordCount.java and click Run As → Run on Hadoop in the popup menu. In the dialog that appears, click "Finish" as prompted, and the job starts.
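Once the job finishes, the word counts can be checked from the command line, for example:

```shell
hadoop fs -ls /test/output
hadoop fs -cat /test/output/part-r-00000
```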



If you don't want to build it yourself, you can download the Hadoop Eclipse plug-in that I generated:

http://download.csdn.net/detail/u012875880/8154149


