Troubleshooting Hadoop startup error: File /opt/hadoop/tmp/mapred/system/ could only be replicated to 0 nodes, instead of 1


When Hadoop was started today, the DataNode failed to start, and the following error was found in the log: could only be replicated to 0 nodes, instead of 1

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(...)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(...)
at sun.reflect.NativeMethodAccessorImpl.invoke(...)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(...)
at java.lang.reflect.Method.invoke(...)
at org.apache.hadoop.ipc.RPC$Server.call(...)
at org.apache.hadoop.ipc.Server$Handler$1.run(...)
at org.apache.hadoop.ipc.Server$Handler$1.run(...)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.ipc.Server$Handler.run(...)

at org.apache.hadoop.ipc.Client.call(...)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(...)
at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(...)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(...)
at java.lang.reflect.Method.invoke(...)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(...)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(...)
at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(...)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(...)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(...)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(...)

The cause: I had formatted HDFS the day before yesterday. Each format (hadoop namenode -format) generates a new namespaceID for the NameNode, but the DataNode's storage directory still holds the namespaceID written by the previous format. Formatting empties the data under the NameNode but not the data under the DataNode, so the two IDs no longer match and the DataNode fails to start.
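In Hadoop 1.x, the ID in question is recorded in the current/VERSION file inside each storage directory, so the mismatch can be confirmed by comparing the two files. A minimal sketch of that comparison, using throwaway files and made-up IDs instead of a live cluster (on a real cluster, read the VERSION files under your dfs.name.dir and dfs.data.dir):

```shell
# Simulate the two VERSION files; the paths and ID values here are
# invented examples, not taken from any real installation.
tmp=$(mktemp -d)
printf 'namespaceID=1113030094\n' > "$tmp/name_VERSION"   # ID written by the latest format
printf 'namespaceID=684015007\n'  > "$tmp/data_VERSION"   # stale ID left on the DataNode

# Extract and compare the two namespaceID values.
nn=$(grep '^namespaceID=' "$tmp/name_VERSION" | cut -d= -f2)
dn=$(grep '^namespaceID=' "$tmp/data_VERSION" | cut -d= -f2)
if [ "$nn" != "$dn" ]; then
    echo "namespaceID mismatch: NameNode=$nn DataNode=$dn"
fi

rm -rf "$tmp"
```

If the two IDs differ, the DataNode refuses to register with the NameNode and shuts itself down, which is exactly the symptom described above.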

Workaround: I created a new folder and pointed the configuration at it. The data directory configured in hdfs-site.xml was originally /opt/hadoop/data; I created a data1 folder under /opt/hadoop, changed the parameter to /opt/hadoop/data1, and finally reformatted HDFS (hadoop namenode -format). That solved the problem.
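The steps above can be sketched as commands and configuration. This assumes a Hadoop 1.x layout where the DataNode storage directory is set by the dfs.data.dir property; note that reformatting destroys everything previously stored in HDFS:

```shell
# Stop the cluster before touching the storage directories.
stop-all.sh

# Create the fresh, empty DataNode storage directory.
mkdir -p /opt/hadoop/data1

# In conf/hdfs-site.xml, repoint dfs.data.dir at the new directory:
#   <property>
#     <name>dfs.data.dir</name>
#     <value>/opt/hadoop/data1</value>
#   </property>

# Reformat the NameNode (erases all existing HDFS data), then restart.
hadoop namenode -format
start-all.sh
```

Switching to an empty directory works because the DataNode writes a fresh VERSION file, with the new namespaceID, into any empty storage directory on its first start after the format.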

