1. Start Hadoop
Switch to the root user and go to the Hadoop installation directory $HADOOP_HOME.
Run bin/start-all.sh
Run jps to check that the Hadoop processes are up.
2. Start eclipse
Go to the Eclipse installation directory and run Eclipse as root:
./eclipse & (the trailing & runs Eclipse in the background so the terminal stays free for other operations).
3. Install the Hadoop plug-in in Eclipse
Window -> Preferences -> Hadoop Map/Reduce: set the Hadoop installation directory, e.g.:
/usr/programFiles/hadoop-1.0.1
If this entry does not appear, the Hadoop plug-in is not installed in Eclipse;
install the Eclipse Hadoop plug-in following http://www.linuxidc.com/Linux/2013-08/88957p2.htm
4. Configure Map/Reduce Locations
Window -> Show View -> Map/Reduce Locations to open the Map/Reduce Locations view.
Right-click in the view and choose New Hadoop Location.
Fill in the host and port configured in mapred-site.xml and core-site.xml.
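For reference, the relevant properties live in the two config files named above. The host/port values below are the common pseudo-distributed defaults (localhost:9000 and localhost:9001), not values taken from this article — use whatever your own files contain.

```xml
<!-- core-site.xml: the "DFS Master" host/port in Eclipse must match this -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: the "Map/Reduce Master" host/port must match this -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```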
5. Create a project
File --> New --> Other --> Map/Reduce Project. The project name can be, for example, WordCount_root.
Copy WordCount.java from the Hadoop installation directory (src/examples/org/apache/hadoop/examples/WordCount.java) into the project, and change the package declaration on the first line of WordCount.java to your own package (e.g. mypackage).
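To see what the WordCount example computes before running it on the cluster, the same logic can be sketched with standard Unix tools (this is only an illustration, not part of the Hadoop job): split the text into one word per line, then count occurrences of each word.

```shell
# Rough local sketch of the WordCount result:
# tr splits words onto separate lines, sort groups duplicates,
# uniq -c counts each distinct word.
echo "hello hadoop hello world" | tr ' ' '\n' | sort | uniq -c
```

Each output line pairs a count with a word, analogous to the (word, count) pairs the job later writes to output/part-r-00000.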
6. Create a folder in the hadoop installation directory:
Create test_wordCount_0103 under /usr/programFiles/hadoop-1.0.1.
In the test_wordCount_0103 folder, create files file0 and file1 and write some words into each.
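The folder and sample files can be created from the shell, for example (a relative path and made-up sample words are used here for illustration; the article's actual location is /usr/programFiles/hadoop-1.0.1/test_wordCount_0103):

```shell
# Create the sample-data folder and two small word files
mkdir -p test_wordCount_0103
echo "hello hadoop hello world" > test_wordCount_0103/file0
echo "hello mapreduce" > test_wordCount_0103/file1
```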
Then create an input directory in the HDFS distributed file system: bin/hadoop fs -mkdir input
7. Copy data from the Linux File System to the HDFS Distributed File System
bin/hadoop fs -put /usr/programFiles/hadoop-1.0.1/test_wordCount_0103 input
8. Run
Right-click the project and choose Run As -> Run Configurations.
Select Java Application, then right-click --> New; a new configuration named WordCount is created.
Configure the run parameters: click Arguments, and in Program arguments enter the input folder you want to pass to the program and the folder where it should save the computed result. Note that the output folder must not already exist; if it does, the job reports an error!
Click Run to run the program.
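The "output must not exist" rule can be guarded against with a quick check before each run (safe_to_run is just an illustrative helper, not a Hadoop command; for an output directory on HDFS with Hadoop 1.x, removal would be done with bin/hadoop fs -rmr output instead):

```shell
# Hadoop aborts with "Output directory ... already exists" when the
# output path is present, so check (or remove) it before each run.
safe_to_run() {
  [ ! -e "$1" ]   # succeeds only when the path does not exist yet
}

safe_to_run output || echo "output exists - remove it before re-running"
```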
9. View the results
bin/hadoop fs -ls output
bin/hadoop fs -cat output/part-r-00000 (or simply: bin/hadoop fs -cat output/*)
Related reading:
Build a Hadoop environment on Ubuntu 13.04: http://www.linuxidc.com/Linux/2013-06/86106.htm
Ubuntu 12.10 + Hadoop 1.2.1 cluster configuration: http://www.linuxidc.com/Linux/2013-09/90600.htm
Build a Hadoop environment on Ubuntu (standalone mode + pseudo-distributed mode): http://www.linuxidc.com/Linux/2013-01/77681.htm
Hadoop environment configuration in Ubuntu: http://www.linuxidc.com/Linux/2012-11/74539.htm
Detailed illustrated tutorial on building a single-host Hadoop environment: http://www.linuxidc.com/Linux/2012-02/53927.htm
Build a Hadoop environment (two Ubuntu systems in virtual machines under Windows): http://www.linuxidc.com/Linux/2011-12/48894.htm
For more information about Hadoop, see the Hadoop topic page: http://www.linuxidc.com/topicnews.aspx?tid=13