Article Directory
- 1. Hadoop requires the same deployment directory structure and an account with the same user name on all machines
- 2. Format HDFS
- 3. DataNode missing
Previously, I had only run Hadoop in standalone mode. Today, a bunch of problems came up when trying to set up Hadoop across two machines. Here are my summary notes.
1. Hadoop requires the same deployment directory structure and an account with the same user name on all machines
The user name on my first machine is hadoop, while on the second machine it is xuwei, which led to many problems. Had I known that the same user name was required on every node, it would have been much less troublesome.
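For illustration, a minimal sketch of preparing a node this way (the hadoop-0.20.1 tarball name and the /home/hadoop install path are assumptions, not taken from my actual setup):
sudo useradd -m hadoop            # same account name on every machine
sudo passwd hadoop
su - hadoop
tar -xzf hadoop-0.20.1.tar.gz     # unpack to the same path everywhere, e.g. /home/hadoop/hadoop-0.20.1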
2. Format HDFS
The Hadoop services are started with:
./start-all.sh
Before doing that, however, we must format HDFS with the following command:
./hadoop namenode -format
If you do not format HDFS first, you may get errors saying that there is no NameNode or DataNode.
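Putting the two steps together, a minimal sketch of the startup sequence on the master (the bin path is an example; jps is just the JDK tool I use here to check which daemons came up, it is not part of the original notes):
cd hadoop-0.20.1/bin          # example deployment path, adjust to your own
./hadoop namenode -format     # format HDFS once, on a fresh cluster only
./start-all.sh                # start NameNode/DataNodes and JobTracker/TaskTrackers
jps                           # list the Java daemons running on this node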
3. DataNode missing
I started the Hadoop services from the master node. All processes on the master started normally, but on the slave node only the TaskTracker appeared and there was no DataNode. Looking at the logs on the slave (saved in hadoop/logs/hadoop-hadoop-datanode-xuwei-laptop.log), I found the following error message:
2011-10-10 10:02:28,447 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = xuwei-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
2011-10-10 10:02:34,144 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hadoop/Program/tmp-hadoop/dfs/data: namenode namespaceID = 1911773165; datanode namespaceID = 1366308813
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
2011-10-10 10:02:34,145 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at xuwei-laptop/127.0.1.1
************************************************************/
From this we can see that the contents of the temporary file directory are inconsistent: the namespaceID stored on the DataNode does not match the NameNode's. Then I remembered that my slave machine had previously run Hadoop in standalone mode, and tmp-hadoop had been formatted back then. So I deleted tmp-hadoop on the slave and restarted the Hadoop services from the master. Everything works.
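For reference, a minimal sketch of the fix (the tmp-hadoop path comes from the log above; adjust it if your hadoop.tmp.dir points elsewhere):
rm -rf /home/hadoop/Program/tmp-hadoop    # on the slave: remove the data left over from the standalone run
./stop-all.sh                             # on the master: restart the Hadoop services
./start-all.sh
After the DataNode comes back up with a fresh data directory, it adopts the NameNode's namespaceID and the error no longer appears.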