My machine environment:
JDK: jdk-1.6.0_43
Linux: Ubuntu 11.10
Hadoop version: hadoop-1.0.4
Eclipse version: 4.2.2
Hadoop Eclipse plug-in: hadoop-eclipse-plugin-1.0.4.jar
1: An error occurs when connecting to DFS; the prompt is "Error: Failure to login".
The pop-up error dialog contains: "An internal error occurred during: "Connecting to DFS hadoop". org/apache/commons/configuration/Configuration".
This is caused by jars that are missing from the Hadoop Eclipse plug-in.
Solution:
Opening the plug-in jar with the archive manager shows that it contains only two jars, commons-cli-1.2.jar and hadoop-core.jar. From the hadoop/lib directory, take the following jars:
- commons-configuration-1.6.jar
- commons-httpclient-3.0.1.jar
- commons-lang-2.4.jar
- jackson-core-asl-1.0.1.jar
- jackson-mapper-asl-1.0.1.jar
and copy all five of them into the lib directory inside hadoop-eclipse-plugin-1.0.4.jar.
Then, modify META-INF/MANIFEST.MF inside the plug-in jar and change the Bundle-ClassPath entry to the following:
Bundle-ClassPath: classes/, lib/hadoop-core.jar, lib/commons-cli-1.2.jar, lib/commons-httpclient-3.0.1.jar, lib/jackson-core-asl-1.0.1.jar, lib/jackson-mapper-asl-1.0.1.jar, lib/commons-configuration-1.6.jar, lib/commons-lang-2.4.jar
This completes the modification of hadoop-eclipse-plugin-1.0.4.jar.
Finally, copy hadoop-eclipse-plugin-1.0.4.jar into the Eclipse plugins directory.
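Before copying the repacked jar into Eclipse, it can help to verify that the five jars and the new Bundle-ClassPath header really made it into the archive. Below is a minimal stand-alone sketch of such a check (my own addition, not from the original post; the jar path is passed in as an argument):

import java.util.Enumeration;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class CheckPluginJar {
    public static void main(String[] args) throws Exception {
        // Path to the repacked plug-in jar, e.g. hadoop-eclipse-plugin-1.0.4.jar
        JarFile jar = new JarFile(args[0]);

        // 1) The Bundle-ClassPath header must list every bundled jar.
        Attributes attrs = jar.getManifest().getMainAttributes();
        System.out.println("Bundle-ClassPath: " + attrs.getValue("Bundle-ClassPath"));

        // 2) The lib/*.jar entries must actually exist inside the archive.
        Enumeration<JarEntry> entries = jar.entries();
        while (entries.hasMoreElements()) {
            String name = entries.nextElement().getName();
            if (name.startsWith("lib/") && name.endsWith(".jar")) {
                System.out.println("bundled: " + name);
            }
        }
        jar.close();
    }
}

Run it as: java CheckPluginJar hadoop-eclipse-plugin-1.0.4.jar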
Reference: http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html
2: "Permission denied" occurs when running the Hadoop program.
The error message is: Exception in thread "main" java.io.FileNotFoundException: /b.txt (Permission denied)
I ran into this while writing a demo that uploads a local file; the program kept reporting insufficient permissions. The cause is that the settings made in "Map/Reduce Locations" do not take full effect in the program: the intent was to create the file in the root directory of HDFS, but the program actually tried to create it in the local root directory, which is obviously wrong. Since we want Eclipse to submit the job to the Hadoop cluster, we can add the cluster addresses manually in the code.
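To make the failure concrete: with an empty Configuration, FileSystem.get(conf) returns the local file system, so a path like /b.txt is created on the local disk, where an ordinary user cannot write to the root directory. A minimal sketch of that failing pattern (my own reconstruction, not the original demo code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhyPermissionDenied {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // no cluster settings picked up
        FileSystem fs = FileSystem.get(conf);       // resolves to the *local* file system
        // Meant to be an HDFS path, but is actually /b.txt on the local disk:
        fs.create(new Path("/b.txt")).close();      // -> FileNotFoundException: /b.txt (Permission denied)
    }
}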
Solution:
Add:
Conf. Set ("mapred. Job. Tracker", "HDFS: // master: 9001 ");
Conf. Set ("fs. Default. Name", "HDFS :/// master: 9000 ");
Note: If the problem persists, try adding further configuration values through conf.set() as well.
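For context, a fuller sketch of the upload demo with these two settings applied (the class name and the local source path /home/hadoop/b.txt are my assumptions; adjust them to your cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the cluster; without these two settings the
        // file operations fall back to the local machine.
        conf.set("mapred.job.tracker", "hdfs://master:9001");
        conf.set("fs.default.name", "hdfs://master:9000");

        FileSystem fs = FileSystem.get(conf);        // now an HDFS client
        // Upload a local file into the root directory of HDFS.
        fs.copyFromLocalFile(new Path("/home/hadoop/b.txt"), new Path("/b.txt"));
        System.out.println("exists on HDFS: " + fs.exists(new Path("/b.txt")));
        fs.close();
    }
}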
Reference: http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html