This walkthrough was verified only with the versions below; other combinations may not work.
I ran the example successfully with this configuration.
Eclipse version: eclipse-jee-europa-winter-linux-gtk.tar
Hadoop version: hadoop-0.20.2
Linux: Ubuntu 8
1. Install JDK 6 and SSH (details omitted; easy to find online)
2. Install and configure hadoop-0.20.2 (details omitted; easy to find online)
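For reference, a minimal pseudo-distributed setup for hadoop-0.20.2 might look like the following. The host and ports (localhost:9000 for HDFS, localhost:9001 for the JobTracker) are assumed typical values, not something this guide prescribes; whatever you choose must match what you later enter in the Eclipse plugin:

```shell
# Assumed minimal pseudo-distributed config for hadoop-0.20.2; adjust to your setup.
# Run from the Hadoop installation directory.
mkdir -p conf

cat > conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > conf/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF
```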
3. Load the Eclipse plugin
The plugin is under contrib/eclipse-plugin in the Hadoop installation directory.
Copy the plugin, unmodified, into the Eclipse plugins directory, then start Eclipse.
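Concretely, installing the plugin is a single copy. The jar name and the two install paths below are assumptions, so adjust them to your own layout:

```shell
# Assumed paths; the plugin jar ships with hadoop-0.20.2 under contrib/eclipse-plugin.
cp "$HADOOP_HOME/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar" \
   "$ECLIPSE_HOME/plugins/"
```

Restart Eclipse afterwards so it picks up the new plugin.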
4. Run the WordCount example
1. Open the MapReduce perspective.
2. Configure the MapReduce location (it must match the settings in the conf directory).
3. In a terminal, format the NameNode.
4. In a terminal, run start-all.sh.
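Steps 3 and 4 amount to two commands, run from the Hadoop installation directory (note that formatting erases any existing HDFS data):

```shell
bin/hadoop namenode -format   # step 3: format the NameNode
bin/start-all.sh              # step 4: start NameNode, DataNode, JobTracker, TaskTracker
```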
5. The DFS Locations view should now show the cluster; expand it step by step to browse the entire HDFS directory tree.
6. Create a directory under your HDFS user directory (make sure its permissions do not block access).
7. Upload the test file to this directory.
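Steps 6 and 7 can also be done from the terminal; the directory and file names here are just examples:

```shell
# Paths without a leading / resolve under /user/<your-username> in HDFS.
bin/hadoop fs -mkdir input
bin/hadoop fs -put /tmp/sample.txt input
bin/hadoop fs -ls input       # verify the upload
```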
8. Create a MapReduce project.
9. Import the WordCount class (under src/example/...; the source is long and omitted here, but you can read it once imported).
10. Right-click the class and set the configuration options in the Run dialog.
Note the following:
A. With the configuration above (hdfs://localhost:9000), the path arguments are resolved from that filesystem root, i.e. from what you see under DFS Locations.
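For example, with fs.default.name set to hdfs://localhost:9000, a relative argument and its fully qualified form refer to the same HDFS path (the directory name here is illustrative):

```shell
bin/hadoop fs -ls input
bin/hadoop fs -ls hdfs://localhost:9000/user/$USER/input   # same listing
```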
B. You must set a JVM argument:
-Xmx500m
(This is only my current understanding; corrections are welcome.)
The JobTracker launches a new JVM for the new job based on this value, and the default heap size is not enough, so 500 MB or more is needed.
11. Start the job (right-click the class and select "Run on Hadoop").
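If the plugin gives you trouble, the same job can be launched entirely from the terminal using the examples jar bundled with the distribution (the jar and output file names below are assumed from a standard 0.20.2 install):

```shell
bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
bin/hadoop fs -cat 'output/part-*'   # inspect the word counts
```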
Conclusion:
In my opinion, the Eclipse plugin does little to improve development efficiency.
In the end, you still have to check everything through the terminal and the web UI anyway!