Uploading files with hdfs dfs -put XXX fails:
17/12/08 17:00:39 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/sanglp/hadoop-2.7.4.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
put: File /user/sanglp/hadoop-2.7.4.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
First, check whether Hadoop is running and all of its processes have started.
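A quick way to check is to list the running Java processes with jps and look for the DataNode. This is a sketch that assumes a standard JDK on the PATH and a pseudo-distributed setup; the exact daemon set depends on your installation:

```shell
# List the running Hadoop daemons. On a healthy pseudo-distributed node you
# would expect NameNode, DataNode and SecondaryNameNode among others.
if command -v jps >/dev/null 2>&1; then
    jps
    # A missing DataNode here matches the "0 datanode(s) running" message
    # in the error above.
    if jps | grep -q "DataNode"; then
        echo "DataNode is running"
    else
        echo "DataNode is NOT running"
    fi
else
    echo "jps not on PATH (requires a JDK); skipping live check"
fi
```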
View disk usage
From here we can see that the DataNode's space is empty.
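The empty-space observation above can be reproduced from the command line (a sketch; assumes the `hdfs` client is on the PATH). `hdfs dfsadmin -report` shows per-DataNode capacity, and "Datanodes available: 0" confirms that no DataNode has registered with the NameNode:

```shell
if command -v hdfs >/dev/null 2>&1; then
    # Per-DataNode capacity report; "Configured Capacity: 0" and
    # "Datanodes available: 0" both point at the same problem.
    hdfs dfsadmin -report
    # Filesystem-level view of total/used/free space, human-readable.
    hdfs dfs -df -h /
else
    echo "hdfs not on PATH; skipping live check"
fi
```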
The reason may be that there was a problem with Hadoop formatting
So I deleted all the files under the logs and tmp directories and reformatted, but it still failed; later I found a clusterID inconsistency problem in the logs.
To fix the inconsistency, I modified the clusterID in the VERSION file,
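The clusterID recorded by the NameNode and each DataNode must match, or the DataNode refuses to register (which is exactly the "0 datanode(s) running" symptom). A minimal sketch of the fix follows; the VERSION paths are assumptions based on the name/data directories configured below, so substitute your own dfs.name.dir and dfs.data.dir:

```shell
# Assumed locations -- adjust to your dfs.name.dir / dfs.data.dir.
NN_VERSION=/soft/hadoop/name/current/VERSION   # NameNode metadata
DN_VERSION=/soft/hadoop/data/current/VERSION   # DataNode storage

if [ -f "$NN_VERSION" ] && [ -f "$DN_VERSION" ]; then
    # Copy the NameNode's clusterID into the DataNode's VERSION file so
    # the DataNode is accepted by the NameNode again.
    NN_CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
    sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" "$DN_VERSION"
    echo "DataNode clusterID set to ${NN_CID}"
else
    echo "VERSION files not found; adjust the paths above"
fi
```

Restart the DataNode after editing the file so it re-registers with the new ID.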
and added the following to hdfs-site.xml:

<property>
    <name>dfs.name.dir</name>
    <value>/soft/hadoop/name</value>  <!-- path of Hadoop's name directory -->
</property>
<property>
    <name>dfs.data.dir</name>
    <value>/soft/hadoop/data</value>  <!-- path of Hadoop's data directory -->
</property>
Then I reformatted and started it again, and the upload succeeded.
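The final reformat-and-restart sequence can be sketched as below. This is illustrative, not a definitive recipe: the tmp/logs locations are assumptions from my install, `hdfs namenode -format` destroys all HDFS metadata, and the stop/start scripts assume `$HADOOP_HOME/sbin` is on the PATH:

```shell
if command -v hdfs >/dev/null 2>&1; then
    stop-dfs.sh                                  # stop all HDFS daemons
    rm -rf /soft/hadoop/tmp/* /soft/hadoop/logs/*  # clear stale state (assumed paths)
    hdfs namenode -format                        # DESTROYS existing HDFS metadata
    start-dfs.sh                                 # start NameNode + DataNode again
    # Verify the original upload now works:
    hdfs dfs -put hadoop-2.7.4.tar.gz /user/sanglp/
    hdfs dfs -ls /user/sanglp/
else
    echo "hdfs not on PATH; skipping"
fi
```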
All that tossing around cost me a whole morning...
"Big Data Series" Hadoop upload file error: _COPYING_ could only be replicated to 0 nodes