When I started Hadoop today, the DataNode failed to start, and the following errors appeared in the log:
java.io.IOException: File /opt/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
The cause: I had formatted HDFS the day before yesterday. Every format (hadoop namenode -format) generates a new namespaceID, so the directory configured by the dfs.data.dir parameter still held the ID created by the previous format, which no longer matched the ID in the directory configured by dfs.name.dir. Formatting the NameNode wipes the data under the NameNode but does not wipe the data under the DataNode, which is why the startup failed.
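You can confirm the mismatch by comparing the namespaceID lines in the two VERSION files. A minimal sketch, assuming dfs.name.dir is /opt/hadoop/name (the post does not state it; substitute your own value) and dfs.data.dir is /opt/hadoop/data as in the original hdfs-site.xml:

    # namespaceID recorded by the NameNode (dfs.name.dir path is an assumption)
    grep namespaceID /opt/hadoop/name/current/VERSION

    # namespaceID recorded by the DataNode (dfs.data.dir from the post)
    grep namespaceID /opt/hadoop/data/current/VERSION

If the two IDs differ, the DataNode will refuse to start after the reformat, producing the error above.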
Workaround: create a new folder for dfs.data.dir and point the parameter at it. Since the dfs.data.dir parameter in the original hdfs-site.xml was /opt/hadoop/data, I created a data1 folder under /opt/hadoop, changed dfs.data.dir to /opt/hadoop/data1, and finally reformatted HDFS (hadoop namenode -format). That solved the problem.
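For reference, the change looks like this; the property snippet below is the standard hdfs-site.xml form for this setting, filled in with the paths mentioned above:

    <!-- hdfs-site.xml: point dfs.data.dir at the freshly created directory -->
    <property>
      <name>dfs.data.dir</name>
      <value>/opt/hadoop/data1</value>
    </property>

    # create the new data directory, then reformat the NameNode
    mkdir /opt/hadoop/data1
    hadoop namenode -format

Because the new directory is empty, the DataNode picks up the namespaceID from the fresh format on its next start, so the IDs match again. Note that reformatting destroys all existing HDFS data.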
Troubleshooting Hadoop startup error: File /opt/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1