Building the Eclipse Platform for Hadoop 2.4.0


I. Building the Eclipse Platform in a Hadoop 2.4.0 Environment

1. Install Eclipse

For a Hadoop cluster, we install Eclipse on the master node. First download the Eclipse installation package (for example: eclipse-jee-luna-sr1-linux-gtk.tar.gz), extract it with the tar -zxvf command, move the extracted directory to /usr/local, and then start Eclipse:

Download URL: http://www.eclipse.org/downloads/?osType=linux&release=undefined
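The steps above can be sketched as shell commands; the archive filename and install path follow the example in the text, and the exact filename will vary with the Eclipse release you download:

```shell
# Extract the downloaded Eclipse archive (filename from the example above)
tar -zxvf eclipse-jee-luna-sr1-linux-gtk.tar.gz

# Move the extracted directory to /usr/local (may require root)
sudo mv eclipse /usr/local/

# Start Eclipse in the background
/usr/local/eclipse/eclipse &
```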

2. Install the Hadoop plugin in Eclipse

Since we are using the Hadoop 2.x series, we need to download the matching version of the plugin (http://pan.baidu.com/s/1mgiHFok). The downloaded ZIP file contains the source code, but we can simply use the pre-compiled JAR. After decompression, hadoop-eclipse-kepler-plugin-2.2.0.jar in the release directory is the compiled plugin. Move this JAR to the /usr/local/eclipse/plugins directory, and then restart Eclipse:
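These steps can be done from a terminal; the name of the downloaded ZIP archive below is an assumption, so substitute whatever filename you actually downloaded:

```shell
# Unzip the downloaded plugin package (archive name is an assumption)
unzip hadoop2x-eclipse-plugin.zip

# Copy the pre-compiled plugin JAR from the release directory
# into Eclipse's plugins directory (may require root)
sudo cp release/hadoop-eclipse-kepler-plugin-2.2.0.jar /usr/local/eclipse/plugins/

# Restart Eclipse with -clean so it re-scans its plugins
/usr/local/eclipse/eclipse -clean &
```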

When "DFS Locations" appears under "Project Explorer" on the left, Eclipse has recognized the Hadoop Eclipse plugin you just installed.

3. Configure the Hadoop installation directory

Select "Preferences" under the "Window" menu. In the dialog that pops up, the list of options on the left includes a "Hadoop Map/Reduce" entry. Click this option and set the Hadoop installation directory (for example, my Hadoop directory is /usr/hadoop). The result looks like this:

4. Configure Map/Reduce Locations

Open Window -> Open Perspective -> Other. In the dialog that pops up, select the "Map/Reduce" option to switch perspectives:

After you click the OK button, a Map/Reduce tab appears in the lower-right corner of Eclipse:

Click the "small elephant" icon in the lower-right corner to open the Hadoop Location configuration window and complete the initial settings:

After clicking "Finish", you will see the "Map/Reduce Location" you just created listed under "Map/Reduce Locations" at the bottom of Eclipse.

View the HDFS file system and try creating folders and uploading files. Here the input and output directories already exist under the /user/hadoop directory:
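The same checks can be done from the command line outside Eclipse (assuming Hadoop's bin directory is on your PATH and the cluster is running):

```shell
# List the HDFS home directory for the hadoop user
hdfs dfs -ls /user/hadoop

# Create a test folder and upload a local file to it
hdfs dfs -mkdir -p /user/hadoop/test
hdfs dfs -put localfile.txt /user/hadoop/test/
```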

Now click "DFS Locations" on the left side of Eclipse: we can see the input and output directories there as well.


With that, our Hadoop Eclipse development environment is fully configured.

II. Debugging and Running a MapReduce Program in Eclipse

Using Eclipse to run MapReduce programs makes writing and debugging them more convenient. We will use WordCount, the example program that ships with Hadoop (stored at /usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar), as an illustration.


1. Create a MapReduce project

From the "File" menu, select "New" -> "Other", find "Map/Reduce Project", and select it:

2. Create the WordCount class:

Select the "WordCount" project, right-click to bring up the context menu, select "New", then select "Class", and fill in the following information:

Then write the MapReduce program in the WordCount class.
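The article does not reproduce the program body; the following is a sketch of the classic WordCount, modeled on the standard example that ships with Hadoop 2.x, and requires the Hadoop client jars on the project classpath:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in each input line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sum the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output paths are supplied as program arguments
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```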

3. Running the WordCount program

(1) Create a directory input1 on HDFS and upload the files:


Note: the contents of the files are: file1.txt: "Hello World"; file2.txt: "Hello Hadoop".
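Creating the directory and uploading the two test files can be done from the command line like this (file contents taken from the note above):

```shell
# Create the input directory in HDFS
hdfs dfs -mkdir -p /user/hadoop/input1

# Create the two test files locally
echo "Hello World"  > file1.txt
echo "Hello Hadoop" > file2.txt

# Upload both files to HDFS
hdfs dfs -put file1.txt file2.txt /user/hadoop/input1/
```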

(2) Configure the run information:

Click WordCount.java, then right-click and choose Run As -> Run Configurations, and configure the run parameters, that is, the input and output directories:

hdfs://master:9000/user/hadoop/input1

hdfs://master:9000/user/hadoop/output1


When you click the Run button, the console displays:


Then we need to create a log4j.properties file under the project's src directory and add the following configuration:
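The original screenshot of the configuration is not reproduced here. A minimal log4j.properties that silences the "no appenders" warning and sends Hadoop's log output to the console might look like this (the log level and pattern below are common defaults, not prescribed by the article):

```
# Root logger: INFO level, write to the console appender
log4j.rootLogger=INFO, console

# Console appender configuration
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
```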


Note: the configuration options of the log4j file are described at http://blog.csdn.net/newhappy2008/article/details/2499250

4. Viewing the results:

① View via the command line:
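Assuming the output directory set in the run configuration above, the result file can be printed with:

```shell
# Print the WordCount output stored in HDFS
hdfs dfs -cat /user/hadoop/output1/part-r-00000
```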


② View in DFS Locations:

After running the WordCount program, refresh the Hadoop directory; the output1 directory appears. Open the part-r-00000 file under it to view the results:
Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.
