Hadoop datanode cannot be started
From: http://book.51cto.com/art/201110/298602.htm
If you encounter problems during installation, or you cannot run Hadoop after installation completes, we recommend that you carefully check the log information. Hadoop records detailed logs, and the log files are saved in the logs folder.
Hadoop keeps these log files for analysis, whether of startup, of every MapReduce job you will run in the future, or of HDFS-related activity.
For example:
The namespaceIDs of the namenode and datanode are inconsistent. Many people hit this error during installation. The log information is:
java.io.IOException: Incompatible namespaceIDs in /root/tmp/dfs/data:
namenode namespaceID = 1307672299; datanode namespaceID = 389959598
If HDFS fails to start, query and analyze the logs. The message above shows that the namespaceIDs of the namenode and datanode do not match.
This problem is usually caused by formatting the namenode two or more times. There are two solutions: the first is to delete all of the datanode's data; the second is to edit the namespaceID of each datanode (in the /dfs/data/current/VERSION file), or the namespaceID of the namenode (in the /dfs/name/current/VERSION file), so that they are consistent.
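The second fix can be sketched as a small script. This is only a sketch: the two VERSION files are fabricated under a temp directory for the demo; on a real cluster they live under your dfs.name.dir and dfs.data.dir (for example /dfs/name/current/VERSION and /dfs/data/current/VERSION), and you should stop the datanode before editing.

```shell
# Demo setup: fake the namenode and datanode VERSION files in a temp dir.
# On a real cluster, point these paths at dfs.name.dir and dfs.data.dir.
WORK=$(mktemp -d)
mkdir -p "$WORK/name/current" "$WORK/data/current"
echo "namespaceID=1307672299" > "$WORK/name/current/VERSION"
echo "namespaceID=389959598" > "$WORK/data/current/VERSION"

# Read the authoritative namespaceID from the namenode's VERSION file...
NSID=$(grep '^namespaceID=' "$WORK/name/current/VERSION" | cut -d= -f2)

# ...keep a backup, then rewrite the datanode's namespaceID to match.
cp "$WORK/data/current/VERSION" "$WORK/data/current/VERSION.bak"
sed -i "s/^namespaceID=.*/namespaceID=$NSID/" "$WORK/data/current/VERSION"

cat "$WORK/data/current/VERSION"
```

After the rewrite, restarting the datanode lets it register with the namenode again; the backup file lets you roll back if anything goes wrong.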
The following two operations are also often needed in practice.
1) Restart a failed datanode or jobtracker. When a single node in a Hadoop cluster has a problem, you generally do not have to restart the entire system; restarting just that node is enough, and it will automatically rejoin the cluster.
Enter the following commands on the failed node:
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker
2) Dynamically add a datanode or tasktracker. The following commands add a node to a running cluster without restarting it:
bin/hadoop-daemon.sh --config ./conf start datanode
bin/hadoop-daemon.sh --config ./conf start tasktracker
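If the new node should remain part of the cluster, it also needs to be listed in the master's conf/slaves file so that future cluster-wide start scripts include it. A minimal sketch, using a stand-in conf directory and the hypothetical hostname node3 (on a real cluster the file is $HADOOP_HOME/conf/slaves):

```shell
# Demo setup: a stand-in conf directory with an existing slaves file.
CONF=$(mktemp -d)
printf 'node1\nnode2\n' > "$CONF/slaves"

# Hypothetical new node to register with the master.
NEW_NODE=node3

# Append the host only if it is not already listed (idempotent).
grep -qx "$NEW_NODE" "$CONF/slaves" || echo "$NEW_NODE" >> "$CONF/slaves"

cat "$CONF/slaves"
```

Running the snippet twice leaves the file unchanged the second time, so it is safe to include in a provisioning script.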