Reprinted from: http://blog.csdn.net/lsttoy/article/details/52400193
Recently, while getting Hadoop jobs running, three problems were encountered.
Baseline: both the name node and the data nodes are healthy, and the web UI shows everything OK, all nodes live.
Phenomenon one: the job stays in the RUNNING state forever with no response.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
Phenomenon two: output as below; the map tasks are retried repeatedly and eventually reported FAILED.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job:  map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED
16/09/01 09:33:45 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_2, Status : FAILED
16/09/01 09:33:58 INFO mapreduce.Job:  map 100% reduce 100%
16/09/01 09:33:58 INFO mapreduce.Job: Job job_1472644198158_0001 failed with state FAILED due to: Task failed task_1472644198158_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/09/01 09:33:58 INFO mapreduce.Job: Counters: 17
	Job Counters
		Failed map tasks=7
		Killed map tasks=1
		Killed reduce tasks=1
		Launched map tasks=8
		Other local map tasks=6
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=123536
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=123536
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=123536
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=126500864
		Total megabyte-milliseconds taken by all reduce tasks=0
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
[root@slave1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/01 10:16:30 INFO client.RMProxy: Connecting to ResourceManager at /114.xxx.xxx.xxx:8032
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/output already exists
	at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Phenomenon three: a child node fails while executing the job.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job:  map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED
Solutions:
For problem one (the job hangs in the RUNNING state), modify yarn-site.xml so that containers get enough memory to start computing: for example, a minimum allocation of 1024 MB and a maximum of 2048 MB.
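A sketch of the relevant yarn-site.xml properties, using the 1024/2048 MB figures mentioned above (the right values depend on how much RAM your nodes actually have):

```xml
<!-- yarn-site.xml: container memory limits (values are examples) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <!-- total memory YARN may hand out on each NodeManager -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
```

Restart YARN after changing this file so the NodeManagers pick up the new limits.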
For problem two, the FileAlreadyExistsException in the stack trace above means the job's output directory already exists on HDFS (here, /output from a previous run): MapReduce refuses to overwrite it. Remove the directory before resubmitting: hadoop fs -rm -r /output (on older releases the equivalent is hadoop fs -rmr /output).
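The cleanup-and-rerun sequence, assuming the example jar from the session above (note the space between /input and /output, which the command shown earlier was missing):

```shell
# remove the stale output directory first (use -rmr on pre-2.x releases)
hadoop fs -rm -r /output
# resubmit the wordcount example job
hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
```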
For problem three, when the job fails on a child node, check the logs on that node; the likely cause is that the data nodes cannot communicate with the name node.
The fix is to correct the hosts file so hostnames resolve properly on every node. The change itself is simple and well documented online :)
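A minimal /etc/hosts sketch for this kind of two-node setup, using the master/slave1 hostnames that appear in the logs above (the IP addresses here are placeholders; every node should carry the same entries, and the machine's own hostname must not resolve to 127.0.0.1):

```
# /etc/hosts on every node (example addresses)
192.168.1.10  master
192.168.1.11  slave1
```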