DataNode fails to start after running hadoop namenode -format multiple times:
After running hadoop namenode -format, I found on the next restart of Hadoop that the DataNode would not start, with the following error:
could only be replicated to 0 nodes, instead of 1

There are many possible causes of this error; here are four common workarounds for reference:
1. Make sure the firewalls on the master (NameNode) and the slaves (DataNodes) are turned off.
2. Check DFS disk space usage.
3. Hadoop's default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and on some Linux systems the file system type of the /tmp directory is not supported by Hadoop.
4. Start the NameNode and the DataNode one after the other:
$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode

All of the above methods were tried, but the DataNode still would not start, so I kept digging and found that the problem lies in how the file system is formatted. The NameNode's data folder (the local path set by dfs.name.dir in the configuration file) holds a current/VERSION file that records a namespaceID, which identifies the version of the formatted NameNode. If we format the NameNode repeatedly, the current/VERSION file saved on the DataNode side (the local path set in the configuration file) still holds the namespaceID from the first format, so the IDs on the DataNode and the NameNode become inconsistent.
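A quick way to check for this mismatch is to compare the namespaceID lines of the two current/VERSION files. The sketch below fabricates two minimal VERSION files in temporary directories purely for illustration; on a real cluster, NAME_DIR and DATA_DIR would be the local paths configured by dfs.name.dir and dfs.data.dir, and a real VERSION file contains more fields than just namespaceID:

```shell
# Illustration only: fabricate minimal VERSION files in temp dirs.
# On a real cluster, point NAME_DIR / DATA_DIR at the dfs.name.dir and
# dfs.data.dir paths from your configuration instead.
NAME_DIR=$(mktemp -d)
DATA_DIR=$(mktemp -d)
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo "namespaceID=123456789" > "$NAME_DIR/current/VERSION"
echo "namespaceID=987654321" > "$DATA_DIR/current/VERSION"

# Extract the two IDs and compare them.
name_id=$(grep '^namespaceID=' "$NAME_DIR/current/VERSION" | cut -d= -f2)
data_id=$(grep '^namespaceID=' "$DATA_DIR/current/VERSION" | cut -d= -f2)
if [ "$name_id" = "$data_id" ]; then
  echo "namespaceIDs match: $name_id"
else
  echo "namespaceID mismatch: NameNode=$name_id DataNode=$data_id"
fi
```

With the fabricated IDs above, the script takes the mismatch branch; on a healthy cluster the two IDs are identical.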
So I simply deleted the five folders data1, data2, datalog1, datalog2, and logs under the Hadoop directory; that way there was no need to worry about keeping the namespaceID consistent across multiple folders. Then I executed:
$ hadoop namenode -format
and then:

$ start-all.sh
Once everything was up, the jps command showed that the DataNode had finally started normally.
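As a footnote, deleting the data directories also throws away any blocks stored in HDFS. A lighter fix that is often suggested for this situation is to stop the cluster and overwrite the DataNode's namespaceID with the NameNode's value. A hedged sketch, again run against fabricated VERSION files in temporary directories (substitute the real dfs.name.dir / dfs.data.dir paths and stop HDFS before editing):

```shell
# Demo with fabricated minimal VERSION files; replace the mktemp dirs
# with the real dfs.name.dir / dfs.data.dir paths on an actual cluster,
# and stop HDFS before touching these files.
NAME_DIR=$(mktemp -d)
DATA_DIR=$(mktemp -d)
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo "namespaceID=123456789" > "$NAME_DIR/current/VERSION"
echo "namespaceID=987654321" > "$DATA_DIR/current/VERSION"

# Copy the NameNode's namespaceID over the DataNode's stale value.
name_id=$(grep '^namespaceID=' "$NAME_DIR/current/VERSION" | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=$name_id/" "$DATA_DIR/current/VERSION"
cat "$DATA_DIR/current/VERSION"
```

After the edit, both VERSION files carry the same namespaceID, so the DataNode can register with the NameNode again without reformatting.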