Problems encountered during Hadoop 2.4.0 installation, and solutions

The DataNode does not start after start-dfs.sh is executed

The DataNode log shows the following:

20:34:59,622 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to localhost/FIG:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hdfs/data: namenode clusterID = CID-af6f15aa-efdd-479b-bf55-77270058e4f7; datanode clusterID = CID-736d1968-8fd1-4bc4-afef-5c72354c39ce
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
        at java.lang.Thread.run(Thread.java:744)

The reason is that the clusterID of the DataNode does not match the clusterID of the NameNode.

Open the DataNode and NameNode storage directories configured in hdfs-site.xml, and open the VERSION file in each one's current subdirectory. The clusterID values are exactly the ones recorded in the log, and indeed inconsistent. Edit the DataNode's VERSION file so its clusterID matches the NameNode's, restart DFS (execute start-dfs.sh), and then run jps to confirm that the DataNode has started properly.
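The fix above can be sketched as a few shell commands. Only the DataNode directory /usr/local/hadoop/hdfs/data appears in the log; the NameNode directory /usr/local/hadoop/hdfs/name here is an assumption for illustration, so adjust both paths to match dfs.namenode.name.dir and dfs.datanode.data.dir in your hdfs-site.xml.

```shell
# Sketch of the fix; paths are assumptions -- verify them against hdfs-site.xml.
stop-dfs.sh    # stop HDFS before touching storage metadata

# Read the NameNode's clusterID from its VERSION file
NN_CID=$(grep '^clusterID=' /usr/local/hadoop/hdfs/name/current/VERSION | cut -d= -f2)

# Overwrite the DataNode's clusterID so it matches the NameNode's
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /usr/local/hadoop/hdfs/data/current/VERSION

start-dfs.sh   # restart DFS
jps            # the DataNode process should now appear in the list
```

Editing VERSION by hand works too; the sed form just avoids copy-paste mistakes in the long CID string.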

Cause of this problem: after DFS was formatted for the first time, Hadoop was started and used, and then hdfs namenode -format was executed again. Reformatting generates a new clusterID for the NameNode, while the clusterID stored by the DataNode remains unchanged, so the two no longer match.
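Given that cause, the mismatch can be avoided entirely when reformatting a test cluster whose HDFS data is disposable: wipe the DataNode's storage directory before running the format, so the DataNode adopts the freshly generated clusterID on its next start. A hedged sketch (the path is the DataNode directory from the log above; this destroys all HDFS data, so never do it on a cluster holding anything you need):

```shell
# DESTRUCTIVE: only for clusters whose HDFS data is disposable.
stop-dfs.sh
rm -rf /usr/local/hadoop/hdfs/data/*   # remove old DataNode storage (and its stale clusterID)
hdfs namenode -format                  # generates a fresh NameNode clusterID
start-dfs.sh                           # the DataNode registers and adopts the new clusterID
```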
