Straight to the problem: two clusters were installed for testing over two days:
(1) a 32-bit Hadoop 1 cluster (5 nodes);
(2) a 64-bit Hadoop 2 cluster (6 nodes).
Both clusters ran into the same symptom: after the NameNode started normally, jps on each node showed the DataNode process running, but in the web UI all the data nodes appeared as dead, or no DataNodes were listed at all. In another variant, the DataNode started and jps showed it running, but checking again later revealed it had died. There were also cases where the reported storage usage showed 100%.
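A quick way to confirm this failure mode is to check the DataNode log for the namespaceID mismatch that typically accompanies it (a sketch; the log path assumes a default tarball install under $HADOOP_HOME, so adjust it to your layout):

    # Confirm the DataNode process is (or was) running on the affected node.
    jps | grep -i DataNode

    # Look for the classic mismatch error in the DataNode log.
    grep -i "Incompatible namespaceIDs" $HADOOP_HOME/logs/hadoop-*-datanode-*.log

If that line appears, the DataNode is shutting itself down because its stored namespaceID no longer matches the NameNode's, rather than failing on its own.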
In fact, the problem was the same on both clusters: after reformatting the NameNode, I had not deleted the old data folder on the master node (that is, the files under the dfs.name.dir path configured in hdfs-site.xml), so the file-system metadata held inside the NameNode no longer corresponded to the data nodes.
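One way to verify the mismatch is to compare the namespaceID recorded in the VERSION files on the two sides (a sketch; /data/dfs/name and /data/dfs/data are placeholders for your own dfs.name.dir and dfs.data.dir values, and Hadoop 2 records a clusterID alongside it):

    # On the master node: the ID written by the most recent format.
    grep namespaceID /data/dfs/name/current/VERSION

    # On a data node: the ID it was initialized with.
    grep namespaceID /data/dfs/data/current/VERSION

If the two IDs differ, the DataNode refuses to serve blocks for this NameNode, which matches the "jps shows it running, then it disappears" behavior above.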
Workaround:
Delete the tmp directory on each node (your path and files will probably differ from mine; it is whatever dfs.name.dir path you set in hdfs-site.xml), then reformat the NameNode, and finally restart the cluster. That resolves the problem.
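The full sequence looks roughly like this (a sketch, assuming the stock Hadoop scripts are on the PATH and /data/hadoop/tmp stands in for whatever storage path your configuration uses; note that reformatting wipes any data already in HDFS):

    # 1. Stop HDFS across the cluster.
    stop-dfs.sh

    # 2. On every node, remove the stale storage directory
    #    (placeholder path; use your configured dfs.name.dir / dfs.data.dir).
    rm -rf /data/hadoop/tmp

    # 3. Reformat the NameNode on the master node only.
    #    Hadoop 1: hadoop namenode -format; Hadoop 2: hdfs namenode -format
    hdfs namenode -format

    # 4. Restart HDFS; the DataNodes should now register and show up
    #    in the web UI on port 50070.
    start-dfs.sh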