Original article: http://blog.csdn.net/kongxx/article/details/6892675
After setting up Hadoop according to the official documentation and starting it in pseudo-distributed mode, running

$ bin/hadoop fs -put conf input

produced the following exception:
11/10/20 08:18:22 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
    at org.apache.hadoop.ipc.Client.call(Client.java:1030)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
11/10/20 08:18:22 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
11/10/20 08:18:22 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/fkong/input/conf/slaves" - Aborting...
put: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
11/10/20 08:18:22 ERROR hdfs.DFSClient: Exception closing file /user/fkong/input/conf/slaves : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
    ... (the same server-side and client-side stack frames repeat)
Exit 255
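The key phrase is "could only be replicated to 0 nodes": the NameNode has no live DataNode to place the block on. A first step worth taking is to see why the DataNode died; its log lives under ${HADOOP_HOME}/logs (a sketch only: the exact file name depends on your user name and hostname):

$ tail -n 50 logs/hadoop-$USER-datanode-$(hostname).log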
Some digging revealed the cause: the default hadoop.tmp.dir path is /tmp/hadoop-${user.name}, and the file system type mounted at /tmp on my Linux machine is not one Hadoop supports. So hadoop.tmp.dir needs to point somewhere else; here I use a subdirectory under the Hadoop installation path. You can first check what is actually mounted at /tmp, as shown below.
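A quick way to check is df -T (from GNU coreutils; the output below is illustrative only, not taken from the machine in question):

$ df -T /tmp
Filesystem     Type   1K-blocks   Used Available Use% Mounted on
tmpfs          tmpfs    4096000 102400   3993600   3% /tmp

With that confirmed, the specific steps are as follows: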
1. Modify the ${HADOOP_HOME}/conf/core-site.xml file so that it reads as follows (see also the note after the file):
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/fkong/hadoop-0.20.203.0/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
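One caveat, which is my addition rather than part of the original steps: hadoop.tmp.dir is also where the NameNode keeps its metadata by default, so if HDFS had already been formatted under /tmp, the NameNode will not find its storage under the new path. In that case, reformat HDFS before restarting; note that this erases anything already stored in HDFS:

$ bin/hadoop namenode -format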
2. Restart the Hadoop daemons (stop them first if any are still running):

$ bin/stop-all.sh
$ bin/start-all.sh
3. Run the command again; this time the exception no longer occurs (to double-check the fix, see the verification commands below):

$ bin/hadoop fs -put conf input
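To confirm the cluster is actually healthy rather than just retrying the upload, two quick checks (a sketch; jps ships with the JDK and dfsadmin -report is a standard HDFS command, though its output format varies across versions):

$ jps
$ bin/hadoop dfsadmin -report

jps should list a DataNode process, and dfsadmin -report should show at least one datanode available; if it still reports zero, the "replicated to 0 nodes" error will come right back.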
In addition, there are other circumstances that can lead to this problem; for those, see the discussion "hadoop: could only be replicated to 0 nodes, instead of 1".