In the previous article, Eclipse was able to access the HDFS directory, but running MapReduce programs still failed. This post summarizes the common errors and their fixes; I hope it is helpful.
Error 1:
ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
Workaround:
E.g., the directory I unzipped to is D:\hadoop-2.6.0
Add a HADOOP_HOME environment variable pointing to that directory, and append %HADOOP_HOME%\bin to the system Path variable.
Then, in Eclipse, open Window -> Preferences and set the Hadoop installation directory to the same path.
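If you prefer not to change the system-wide environment variables, the same effect can be had by setting the hadoop.home.dir system property at the very top of main(), before any Hadoop class is loaded. This is a minimal sketch assuming the D:\hadoop-2.6.0 unzip directory from the example above; the class name is just illustrative.

```java
public class HadoopHomeFix {
    public static void main(String[] args) {
        // Must run before any Hadoop class is touched, because
        // org.apache.hadoop.util.Shell locates winutils.exe from
        // HADOOP_HOME or this property when it is first loaded.
        // "D:\\hadoop-2.6.0" is the assumed unzip directory from above.
        System.setProperty("hadoop.home.dir", "D:\\hadoop-2.6.0");
        System.out.println("hadoop.home.dir = "
                + System.getProperty("hadoop.home.dir"));
    }
}
```

The property only helps if it is set before the first Hadoop call in the program, so put it on the first line of main() rather than after the Job is created.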
Error 2: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error 3: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
Both of the above errors are caused by missing native components. Workaround:
Download hadoop-common-2.2.0-bin-master
and copy the contents of its bin directory (including winutils.exe and hadoop.dll) into the bin directory of your local Hadoop installation.
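To confirm the replacement worked, a small stdlib-only check can report whether the two native files Hadoop needs on Windows are in place. The D:\hadoop-2.6.0 path is the assumed install directory from the earlier example; adjust it to yours.

```java
import java.io.File;

public class NativeBinCheck {
    public static void main(String[] args) {
        // Assumed install path from the example above; change to your own.
        File bin = new File("D:\\hadoop-2.6.0", "bin");
        // winutils.exe is what Error 1 complains about;
        // hadoop.dll is behind Errors 2 and 3.
        for (String name : new String[] { "winutils.exe", "hadoop.dll" }) {
            File f = new File(bin, name);
            System.out.println(name + ": " + (f.exists() ? "found" : "missing"));
        }
    }
}
```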
Error 4:
15/08/03 10:15:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/03 10:15:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/03 10:15:55 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/03 10:15:57 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/03 10:15:59 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Workaround:
In the cluster configuration file core-site.xml, change localhost to the IP of your master node.
Then add the following to yarn-site.xml:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>127.0.0.1</value>
</property>
Note: replace 127.0.0.1 with your own IP.
It is important to note that after the change you must restart the cluster, copy the modified files into the program's src directory, and refresh the project in Eclipse.
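For reference, the corresponding core-site.xml entry would look roughly like the following. Here 192.168.1.100 is a placeholder for your master node's IP, and port 9000 is an assumed NameNode port taken from common Hadoop 2.x tutorials; use whatever your cluster's fs.defaultFS actually specifies.

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- Use the master node's IP here, not localhost -->
    <value>hdfs://192.168.1.100:9000</value>
  </property>
</configuration>
```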
Error 5:
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no job control
The workaround in the online tutorial at http://www.aboutyun.com/thread-8498-1-1.html did not resolve it for me.
The real solution is:
Add the following property to the client-side configuration file:
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
Note: the property must be added to a local configuration file that the client program itself reads; adding it to files such as core-site.xml or mapred-site.xml under the client's Hadoop installation path does not take effect.
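In other words, a minimal mapred-site.xml placed in the project's src directory, so that it sits on the program's classpath, would look like this:

```xml
<configuration>
  <property>
    <name>mapreduce.app-submission.cross-platform</name>
    <value>true</value>
  </property>
</configuration>
```

Equivalently, the property can be set directly on the job's Configuration object in code before the job is submitted.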
I also recommend a good blog post on this topic: http://www.aboutyun.com/thread-8311-1-1.html
Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.
Using Eclipse under Windows to compile and run MapReduce programs on Hadoop 2.6.0/Ubuntu (II)