Hadoop error "could only is replicated to 0 nodes, instead of 1".

Source: Internet
Author: User
Tags: hadoop, fs, log4j
Hadoop Error "could only is replicated to 0 nodes, instead of 1"root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/hadoop fs-put conf input10/07/18 12:31:05 INFO HDFs. Dfsclient:org.apache.hadoop.ipc.remoteexception:java.io.ioexception:file/user/root/input/log4j.properties could Only being replicated to 0 nodes, instead of 1
At Org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock (fsnamesystem.java:1287)
At Org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock (namenode.java:351)
At Sun.reflect.NativeMethodAccessorImpl.invoke0 (Native method)
At Sun.reflect.NativeMethodAccessorImpl.invoke (nativemethodaccessorimpl.java:39)
At Sun.reflect.DelegatingMethodAccessorImpl.invoke (delegatingmethodaccessorimpl.java:25)
At Java.lang.reflect.Method.invoke (method.java:597)
At Org.apache.hadoop.ipc.rpc$server.call (rpc.java:481)
At Org.apache.hadoop.ipc.server$handler.run (server.java:894)

At Org.apache.hadoop.ipc.Client.call (client.java:697)
At Org.apache.hadoop.ipc.rpc$invoker.invoke (rpc.java:216)
At $Proxy 0.addBlock (Unknown Source)
At Sun.reflect.NativeMethodAccessorImpl.invoke0 (Native method)
At Sun.reflect.NativeMethodAccessorImpl.invoke (nativemethodaccessorimpl.java:39)
At Sun.reflect.DelegatingMethodAccessorImpl.invoke (delegatingmethodaccessorimpl.java:25)
At Java.lang.reflect.Method.invoke (method.java:597)
At Org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod (retryinvocationhandler.java:82)
At Org.apache.hadoop.io.retry.RetryInvocationHandler.invoke (retryinvocationhandler.java:59)
At $Proxy 0.addBlock (Unknown Source)
At Org.apache.hadoop.hdfs.dfsclient$dfsoutputstream.locatefollowingblock (dfsclient.java:2823)
At Org.apache.hadoop.hdfs.dfsclient$dfsoutputstream.nextblockoutputstream (dfsclient.java:2705)
At org.apache.hadoop.hdfs.dfsclient$dfsoutputstream.access$2000 (dfsclient.java:1996)
At Org.apache.hadoop.hdfs.dfsclient$dfsoutputstream$datastreamer.run (dfsclient.java:2182)

10/07/18 12:31:05 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/input/log4j.properties retries left 4
10/07/18 12:31:05 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1287)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2823)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2705)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

10/07/18 12:31:05 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/input/log4j.properties retries left 3
10/07/18 12:31:06 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1287)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2823)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2705)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

10/07/18 12:31:06 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/input/log4j.properties retries left 2
10/07/18 12:31:08 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1287)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2823)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2705)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

10/07/18 12:31:08 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/input/log4j.properties retries left 1
10/07/18 12:31:11 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1287)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2823)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2705)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

10/07/18 12:31:11 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/07/18 12:31:11 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/input/log4j.properties" - Aborting...
put: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0 nodes, instead of 1

That is quite a long chunk of error output, heh. I ran into this problem and searched around the Internet; what follows is what I found, and there does not seem to be a single standard solution. Generally speaking, the error means the NameNode cannot find any live DataNode to place the block on, which is usually the result of the NameNode and the DataNode being in an inconsistent state (for example, the DataNode process is not running, never registered with the NameNode, or has no usable disk space).
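
Before doing anything destructive, it is worth confirming that the cluster really has no live DataNodes. A minimal check for a single-node 0.19.x setup like this one might look as follows; the DataNode log file name is inferred from the start-up output shown further down, so adjust the paths to your own installation:

# are all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) running?
jps

# ask the NameNode how many DataNodes it can see; with this error it will report none alive
bin/hadoop dfsadmin -report

# if the DataNode is missing, its log usually explains why (not started, full disk, namespaceID mismatch, ...)
tail -n 50 logs/hadoop-root-datanode-scutshuxue-desktop.log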

There is a brute-force way around it, but it throws away all data currently in HDFS, so use it with caution.

1. Stop all Hadoop services.

2. Re-format the NameNode (see the note after this list about the DataNode's data directory).

3. Restart all services.

4. Re-run the operation that failed.
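
A caveat of my own, not from the original post: re-formatting the NameNode gives HDFS a new namespaceID, and a DataNode that still holds data from the old namespace may later refuse to start with an "Incompatible namespaceIDs" error. On a throw-away setup like this one, which uses the 0.19 defaults under /tmp, the DataNode's data directory can be cleared as well between steps 1 and 2. The path below assumes the default hadoop.tmp.dir, so check dfs.data.dir in your configuration before deleting anything:

# only needed if the DataNode complains about "Incompatible namespaceIDs" after the re-format;
# /tmp/hadoop-root/dfs/data is the 0.19 default data directory (assumed, not taken from the post above)
rm -rf /tmp/hadoop-root/dfs/data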

Here are the exact steps I ran:

root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
no namenode to stop
localhost: no datanode to stop
localhost: stopping secondarynamenode
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/hadoop namenode -format
10/07/18 12:46:23 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = scutshuxue-desktop/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.19.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.19 -r 789657; compiled by 'root' on Tue Jun 12:40:50 EDT 2009
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) Y
10/07/18 12:46:24 INFO namenode.FSNamesystem: fsOwner=root,root
10/07/18 12:46:24 INFO namenode.FSNamesystem: supergroup=supergroup
10/07/18 12:46:24 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/07/18 12:46:25 INFO common.Storage: Image file of size saved in 0 seconds.
10/07/18 12:46:25 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
10/07/18 12:46:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at scutshuxue-desktop/127.0.1.1
************************************************************/
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# ls
bin          docs                        lib          README.txt
build.xml    hadoop-0.19.2-ant.jar       libhdfs      src
c++          hadoop-0.19.2-core.jar      librecordio  test-txt
CHANGES.txt  hadoop-0.19.2-examples.jar  LICENSE.txt  webapps
conf         hadoop-0.19.2-test.jar      logs
contrib      hadoop-0.19.2-tools.jar     NOTICE.txt
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/start-all.sh
starting namenode, logging to /home/root/hadoop-0.19.2/bin/../logs/hadoop-root-namenode-scutshuxue-desktop.out
localhost: starting datanode, logging to /home/root/hadoop-0.19.2/bin/../logs/hadoop-root-datanode-scutshuxue-desktop.out
localhost: starting secondarynamenode, logging to /home/root/hadoop-0.19.2/bin/../logs/hadoop-root-secondarynamenode-scutshuxue-desktop.out
starting jobtracker, logging to /home/root/hadoop-0.19.2/bin/../logs/hadoop-root-jobtracker-scutshuxue-desktop.out
localhost: starting tasktracker, logging to /home/root/hadoop-0.19.2/bin/../logs/hadoop-root-tasktracker-scutshuxue-desktop.out
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/hadoop fs -put conf input
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/hadoop dfs -ls
Found 1 items
drwxr-xr-x   - root supergroup          0 2010-07-18 12:47 /user/root/input
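
As a final sanity check (my addition, not part of the original session): after the restart the DataNode should report in as live and the uploaded files should be readable, which the standard commands below would confirm:

# the report should now show at least one live DataNode
bin/hadoop dfsadmin -report

# the uploaded conf files should be listed and readable
bin/hadoop fs -ls input
bin/hadoop fs -cat input/log4j.properties | head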
