Solving the problem of a Hadoop DataNode that cannot be started
java.io.IOException: File ... could only be replicated to 0 nodes, instead of 1.
Running `hadoop dfsadmin -report` shows that no DataNode has registered:
[hadoop@namenode hadoop]$ hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-----------------
Datanodes available: 0 (0 total, 0 dead)
The most common cause is that the NameNode has been formatted more than once, leaving the namespaceID in the DataNode's VERSION file inconsistent with the NameNode's.
In that case the DataNode logs may have been cleared, and sometimes no DataNode log is generated at all even after a restart.
Solution: find the VERSION file whose namespaceID does not match and edit its namespaceID so the DataNode agrees with the NameNode.
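This first fix can be sketched as a small shell function. It is a sketch, assuming the usual VERSION file layout with a `namespaceID=` line; the function name and the example paths are illustrative, not part of Hadoop itself:

```shell
# sync_namespace_id NN_VERSION DN_VERSION
# Copies the NameNode's namespaceID into the DataNode's VERSION file when
# the two disagree. Pass whatever ${dfs.name.dir}/current/VERSION and
# ${dfs.data.dir}/current/VERSION resolve to on your cluster.
sync_namespace_id() {
  nn_ver="$1"    # namenode VERSION file
  dn_ver="$2"    # datanode VERSION file
  nn_id=$(grep '^namespaceID=' "$nn_ver" | cut -d= -f2)
  dn_id=$(grep '^namespaceID=' "$dn_ver" | cut -d= -f2)
  if [ "$nn_id" != "$dn_id" ]; then
    echo "namespaceID mismatch: namenode=$nn_id datanode=$dn_id"
    # Rewrite only the namespaceID line; the rest of VERSION stays untouched.
    sed -i "s/^namespaceID=.*/namespaceID=$nn_id/" "$dn_ver"
  fi
}
```

Stop the DataNode before editing its VERSION file, then restart it afterwards.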
Or
Delete all files under the DataNode's data directory (hdfs/data) and reformat the NameNode. Note that this destroys everything stored in HDFS (consistent with the empty report above).
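The destructive alternative looks roughly like the following on a Hadoop 1.x cluster. The data path shown is the default under hadoop.tmp.dir and is only an example; substitute the dfs.data.dir value from your hdfs-site.xml.

```shell
# WARNING: this erases everything stored in HDFS.
stop-all.sh                              # stop the cluster first
rm -rf /tmp/hadoop-hadoop/dfs/data/*     # repeat on every datanode host
hadoop namenode -format                  # reinitialize the namenode (confirm with Y)
start-all.sh
```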
PS: another common reason a DataNode fails to start is incorrect permissions on its data directory.
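The permission symptom can be checked and corrected with a short function; DataNodes of this era typically refuse a data directory whose permissions are looser than rwxr-xr-x (755). A sketch, assuming GNU stat; the function name is illustrative:

```shell
# check_data_dir_perms DIR -- tighten DIR to the 755 permissions the
# datanode expects; it rejects directories that are e.g. world-writable.
check_data_dir_perms() {
  dir="$1"
  mode=$(stat -c '%a' "$dir")      # GNU stat: octal permission bits
  if [ "$mode" != "755" ]; then
    echo "fixing permissions on $dir ($mode -> 755)"
    chmod 755 "$dir"
  fi
}
```

Run it as the user that owns the Hadoop processes against every directory listed in dfs.data.dir; ownership may also need to be corrected, e.g. with chown -R hadoop:hadoop.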