Compiling the Eclipse Plugin for Hadoop 1.0


This is an original article; if you reproduce it, please credit the source.

Welcome to my personal blog, www.wuyudong.com, for more articles about cloud computing and big data.

Unlike version 0.20.2, Hadoop 1.0 does not ship with a ready-made Eclipse plugin package; instead, the plugin source code sits under HADOOP_HOME/src/contrib/eclipse-plugin. In this article I record in detail how I compiled that source to produce the Eclipse plugin for Hadoop 1.0.

1. Installation Environment

Operating system: Ubuntu 14.04
Software:
Eclipse
Java
Hadoop 1.0

2. Compilation Steps

(1) First download the Ant and Ivy installation packages.

Unzip both packages to a directory of your choice, copy ivy-2.2.0.jar from the Ivy package into the lib directory of the Ant installation, and then add the following to /etc/profile to set up the environment:

export ANT_HOME=/home/wu/opt/apache-ant-1.8.3
export PATH="$ANT_HOME/bin:$PATH"
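
After editing /etc/profile, reload it in the current shell and confirm that Ant runs and can see the Ivy jar (the version numbers below match the packages I used; yours may differ):

source /etc/profile
ant -version
ls $ANT_HOME/lib/ivy-2.2.0.jar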

(2) In a terminal, change to the Hadoop installation directory and run ant compile. The tail of the output looks like this:

........................

compile:
     [echo] contrib: vaidya
    [javac] /home/wu/opt/hadoop-1.0.1/src/contrib/build-contrib.xml:185: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling source files to /home/wu/opt/hadoop-1.0.1/build/contrib/vaidya/classes
    [javac] Note: /home/wu/opt/hadoop-1.0.1/src/contrib/vaidya/src/java/org/apache/hadoop/vaidya/statistics/job/JobStatistics.java uses unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.

compile-ant-tasks:
    [javac] /home/wu/opt/hadoop-1.0.1/build.xml:2170: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling 5 source files to /home/wu/opt/hadoop-1.0.1/build/ant

compile:

BUILD SUCCESSFUL

Total time: 12 minutes seconds

You can see the compilation succeeded! It takes quite a while, so you can brew a pot of tea and relax in the meantime.
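
For reference, the output above came from just two commands (the path reflects my layout; adjust it to wherever you unpacked Hadoop):

cd /home/wu/opt/hadoop-1.0.1
ant compile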

(3) Then change the terminal to HADOOP_HOME/src/contrib/eclipse-plugin and execute the following command:

ant -Declipse.home=/home/wu/opt/eclipse -Dversion=1.0.1 jar

Once the compilation is complete, you can find the Eclipse plugin.
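
On my setup the built jar ends up under build/contrib/eclipse-plugin inside the Hadoop source tree. To install it, copy it into Eclipse's plugins (or dropins) directory and restart Eclipse; the paths below follow my layout and are only an example:

cp /home/wu/opt/hadoop-1.0.1/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-1.0.1.jar /home/wu/opt/eclipse/plugins/
/home/wu/opt/eclipse/eclipse -clean

After the restart, the Map/Reduce perspective should be available under Window -> Open Perspective -> Other.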

3. Installation Steps

(1) The pseudo-distributed configuration process is also very simple and only requires modifying a few files. In the conf folder of the Hadoop distribution you will find the configuration files below; I will not walk through the process in detail, but here is my configuration:

core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/wu/hadoop-0.20.2/tmp</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
</configuration>

Then, still in the conf folder, edit hadoop-env.sh: uncomment the JAVA_HOME line and set it to the correct JDK path.
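
For example, after uncommenting it, the line in conf/hadoop-env.sh looks like this (the JDK path is only an example; point it at your own installation):

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64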

(2) Running Hadoop

Enter the Hadoop directory. For the first run you need to format the file system; enter the command:

bin/hadoop namenode -format

Then enter the command to start all of the daemons:

bin/start-all.sh

To shut down Hadoop, run:

bin/stop-all.sh

Finally, verify that Hadoop was installed successfully: open a browser and visit:

http://localhost:50030/ (the MapReduce web UI)

http://localhost:50070/ (the HDFS web UI)
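
If you prefer the terminal, a quick sanity check (assuming curl is installed) is to confirm that both ports answer with HTTP 200:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/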

Use the jps command to see which Java processes are running; if they look like the following, everything is normal:

~/opt/hadoop-1.0.1$ jps
4113 SecondaryNameNode
4318 TaskTracker
3984 DataNode
3429
3803 NameNode
4187 JobTracker
4415 Jps

After the system starts up, run a sample program:

$ mkdir input
$ cd input
$ echo "Hello World" > test1.txt
$ echo "Hello Hadoop" > test2.txt
$ cd ..
$ bin/hadoop dfs -put input in
$ bin/hadoop jar hadoop-examples-1.0.1.jar wordcount in out
$ bin/hadoop dfs -cat out/*

A long run log appears:

****hdfs://localhost:9000/user/wu/in
15/05/29 10:51:41 INFO input.FileInputFormat: Total input paths to process : 2
15/05/29 10:51:42 INFO mapred.JobClient: Running job: job_201505291029_0001
15/05/29 10:51:43 INFO mapred.JobClient:  map 0% reduce 0%
15/05/29 10:52:13 INFO mapred.JobClient:  map 100% reduce 0%
15/05/29 10:52:34 INFO mapred.JobClient:  map 100% reduce 100%
15/05/29 10:52:39 INFO mapred.JobClient: Job complete: job_201505291029_0001
15/05/29 10:52:39 INFO mapred.JobClient: Counters: 29
15/05/29 10:52:39 INFO mapred.JobClient:   Job Counters
15/05/29 10:52:39 INFO mapred.JobClient:     Launched reduce tasks=1
15/05/29 10:52:39 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=43724
15/05/29 10:52:39 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
15/05/29 10:52:39 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
15/05/29 10:52:39 INFO mapred.JobClient:     Launched map tasks=2
15/05/29 10:52:39 INFO mapred.JobClient:     Data-local map tasks=2
15/05/29 10:52:39 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=20072
15/05/29 10:52:39 INFO mapred.JobClient:   File Output Format Counters
15/05/29 10:52:39 INFO mapred.JobClient:     Bytes Written=25
15/05/29 10:52:39 INFO mapred.JobClient:   FileSystemCounters
15/05/29 10:52:39 INFO mapred.JobClient:     FILE_BYTES_READ=55
15/05/29 10:52:39 INFO mapred.JobClient:     HDFS_BYTES_READ=239
15/05/29 10:52:39 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=64837
15/05/29 10:52:39 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=25
15/05/29 10:52:39 INFO mapred.JobClient:   File Input Format Counters
15/05/29 10:52:39 INFO mapred.JobClient:     Bytes Read=25
15/05/29 10:52:39 INFO mapred.JobClient:   Map-Reduce Framework
15/05/29 10:52:39 INFO mapred.JobClient:     Map output materialized bytes=61
15/05/29 10:52:39 INFO mapred.JobClient:     Map input records=2
15/05/29 10:52:39 INFO mapred.JobClient:     Reduce shuffle bytes=61
15/05/29 10:52:39 INFO mapred.JobClient:     Spilled Records=8
15/05/29 10:52:39 INFO mapred.JobClient:     Map output bytes=41
15/05/29 10:52:39 INFO mapred.JobClient:     CPU time spent (ms)=7330
15/05/29 10:52:39 INFO mapred.JobClient:     Total committed heap usage (bytes)=247275520
15/05/29 10:52:39 INFO mapred.JobClient:     Combine input records=4
15/05/29 10:52:39 INFO mapred.JobClient:     SPLIT_RAW_BYTES=214
15/05/29 10:52:39 INFO mapred.JobClient:     Reduce input records=4
15/05/29 10:52:39 INFO mapred.JobClient:     Reduce input groups=3
15/05/29 10:52:39 INFO mapred.JobClient:     Combine output records=4
15/05/29 10:52:39 INFO mapred.JobClient:     Physical memory (bytes) snapshot=338845696
15/05/29 10:52:39 INFO mapred.JobClient:     Reduce output records=3
15/05/29 10:52:39 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1139433472
15/05/29 10:52:39 INFO mapred.JobClient:     Map output records=4

To view the out folder:

~/opt/hadoop-1.0.1$ bin/hadoop dfs -cat out/*

Hadoop 1
Hello 2
World 1
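
The wordcount program used above ships in hadoop-examples-1.0.1.jar, and with the Eclipse plugin installed you can develop and run jobs like it directly from the IDE. For reference, a minimal WordCount against the Hadoop 1.0 mapreduce API looks roughly like this (a sketch for orientation, not the exact source shipped in the examples jar):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as combiner): sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");  // Job.getInstance() only arrives in later Hadoop versions
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. "in"
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. "out"
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}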
