In hadoop-root-datanode-ubuntu.log:

2015-03-12 23:52:33,671 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /hdfs/name/dfs/data:
namenode clusterID = CID-70d64aad-1dfe-4f87-af15-d53ff80db3dd; datanode clusterID = CID-388a9ec6-cb87-4b0d-97c4-3b4d5c787b76
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
        at java.lang.Thread.run(Thread.java:745)
2015-03-12 23:52:33,680 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2015-03-12 23:52:33,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2015-03-12 23:52:35,790 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-03-12 23:52:35,791 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2015-03-12 23:52:35,792 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/

Cause: reformatting the NameNode generates a new clusterID, so the NameNode and DataNode clusterIDs no longer match and the DataNode cannot start (a quick way to confirm the mismatch is sketched below).

In addition, this error causes the following failure when Hive loads data (CREATE TABLE does not fail, because the table metadata is not stored in HDFS):

hive> load data local inpath '/root/dbfile' overwrite into table employees PARTITION (country='US', state='IL');
Loading data to table default.employees partition (country=US, state=IL)
Failed with exception Unable to move source file: /root/dbfile to destination hdfs://localhost:9000/user/hive/warehouse/employees/country=US/state=IL/dbfile
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

Workaround: delete the directories where HDFS stores its data (related parameters: dfs.name.dir and dfs.data.dir) and reformat HDFS:

hadoop namenode -format
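
Before deleting anything, the mismatch itself can be confirmed by comparing the clusterID recorded in the two VERSION files. This is a minimal check that assumes the DataNode directory /hdfs/name/dfs/data from the log above and a NameNode directory of /hdfs/name/dfs/name; adjust both to whatever dfs.name.dir and dfs.data.dir are set to in hdfs-site.xml:

grep clusterID /hdfs/name/dfs/name/current/VERSION    # NameNode storage (path assumed from dfs.name.dir)
grep clusterID /hdfs/name/dfs/data/current/VERSION    # DataNode storage (path taken from the log above)

The two values must be identical; in the log above they are not.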
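
A minimal sketch of the full workaround, assuming the same directory layout and that Hadoop's sbin scripts are on the PATH. Note that this wipes all data currently stored in HDFS:

stop-dfs.sh                                           # stop HDFS before touching its storage directories
rm -rf /hdfs/name/dfs/name/* /hdfs/name/dfs/data/*    # directories behind dfs.name.dir / dfs.data.dir (paths assumed)
hadoop namenode -format                               # generates a fresh clusterID on the NameNode
start-dfs.sh                                          # the DataNode registers against the new clusterID

After the restart, the DataNode should come up cleanly and the Hive LOAD DATA statement above should succeed.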