Use MyEclipse to develop Hadoop programs in Ubuntu



The development environment is Ubuntu 11.04, Hadoop 0.20.2, and MyEclipse 9.1.

First, install MyEclipse. Installation on Ubuntu works the same way as on Windows: download myeclipse-9.1-offline-installer-linux.run and double-click it to run the installer.

Next, install the Hadoop plug-in for MyEclipse. The Eclipse plug-in ships with Hadoop itself; for 0.20.2 it is under the Hadoop installation directory at contrib/eclipse-plugin. Copy the plug-in JAR into the dropins folder of the MyEclipse installation directory.

Then restart MyEclipse; it will automatically detect the new plug-in.

After the plug-in is successfully installed, open Window --> Preferences and you will find the Hadoop Map/Reduce option. There, configure the Hadoop installation directory, then close the dialog.

Next, open the Map/Reduce perspective: MyEclipse --> Window --> Open Perspective --> Other, check "Show all", and select the Map/Reduce entry (shown with a small icon).

Now that the Hadoop plug-in is installed, configure it to connect to your Hadoop platform.

Create a new Hadoop Location in the Map/Reduce Locations view: right-click in the view and choose New Hadoop Location. In the dialog box that pops up, configure the Location name (for example, myHadoop), the Map/Reduce Master, and the DFS Master. The Host and Port fields are the address and port you configured in mapred-site.xml and core-site.xml, respectively.
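For reference, the Host and Port values come from entries like the following in a pseudo-distributed Hadoop 0.20 setup (localhost and the port numbers 9000/9001 are common examples; your values may differ):

```xml
<!-- core-site.xml: determines the DFS Master host and port -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: determines the Map/Reduce Master host and port -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```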

Exit after configuration. Click DFS Locations --> myHadoop. If folders are displayed, the configuration is correct; if "no connection" is displayed, check your configuration.

After completing the series of operations above, we can start developing. Let's begin with a "hello world" style exercise.

Create a project.

File --> New --> Other --> Map/Reduce Project

Name the project whatever you like, for example hadoop-helloWorld.

Copy WordCount.java from the Hadoop installation directory (src/examples/org/apache/hadoop/examples/WordCount.java) into the project you just created.

Upload the sample data folder.

To run the program, we need an input folder and an output folder. The output folder is generated automatically when the program finishes successfully, so we only need to prepare an input folder for the program.

1. Create the input folder in the current directory (for example, the Hadoop installation directory), and create two files, file01 and file02, under it with the following content:

file01:

Hello World Bye World

file02:

Hello Hadoop Goodbye Hadoop
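The two files above can be created from a terminal like this (the folder and file names follow the tutorial; adjust paths as needed):

```shell
# Create a local input folder with the two sample files
mkdir -p input
printf 'Hello World Bye World\n' > input/file01
printf 'Hello Hadoop Goodbye Hadoop\n' > input/file02

# Verify the contents
cat input/file01 input/file02
```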

2. Upload the folder input to the distributed file system.

In the terminal where the Hadoop daemons were started, cd to the Hadoop installation directory and run the following command:

bin/hadoop fs -put input input01

This command uploads the input folder to the Hadoop file system as a folder named input01. You can view it with the following command:

bin/hadoop fs -ls

Run the project.

1. In the newly created hadoop-helloWorld project, click WordCount.java, then right-click --> Run As --> Run Configurations.

2. In the Run Configurations dialog box that pops up, right-click Java Application --> New; a new configuration named WordCount is created.

3. Configure the run parameters: click Arguments and, under Program arguments, enter the input folder you want to pass to the program and the folder where you want the program to save its result, for example:

hdfs://localhost:9000/user/xx/input01 hdfs://localhost:9000/user/xx/output01

Here input01 is the folder you just uploaded; enter the folder addresses as needed.

4. Click Run to run the program. After a while the run completes; then run the following command in the terminal:

bin/hadoop fs -ls

Check whether the folder output01 has been generated.

Run the following command to view the generated file content:

bin/hadoop fs -cat output01/*

If the following output is displayed, congratulations: you have successfully run your first MapReduce program in Eclipse.

Bye 1
Goodbye 1
Hadoop 2
Hello 2
World 2
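As a sanity check, the same counts can be reproduced locally with standard shell tools. This only mimics what the WordCount job computes; the snippet recreates the two sample files itself so it is self-contained:

```shell
# Recreate the sample input locally
mkdir -p input
printf 'Hello World Bye World\n' > input/file01
printf 'Hello Hadoop Goodbye Hadoop\n' > input/file02

# One word per line, then count occurrences of each distinct word,
# printing "word count" the way the WordCount output reads
cat input/file01 input/file02 | tr ' ' '\n' | sort | uniq -c | awk '{print $2, $1}'
```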
