Hadoop Datanode failed to start

Source: Internet
Author: User

Questions Guide:

1. Where should I start looking when Hadoop has problems?
2. The DataNode cannot start; how do we solve it?
3. How do I add a DataNode or TaskTracker dynamically?

First, the problem description
When the file system has been formatted several times, for example:

/usr/local/hadoop-1.0.2# bin/hadoop namenode -format



the DataNode will not start. Checking the log reveals the error:

java.io.IOException: Incompatible namespaceIDs in ...: namenode namespaceID = 155319143; datanode namespaceID = 1036135033

Second, the cause of the problem
When the file system is formatted, a current/VERSION file is saved in the NameNode's data folder (the local path set by dfs.name.dir in the configuration file); it records the namespaceID, identifying that formatted version of the NameNode. If we format the NameNode repeatedly, the current/VERSION file saved on each DataNode (under the local path set by dfs.data.dir in the configuration file) still holds only the namespaceID recorded at the first format, so the IDs on the DataNode and the NameNode no longer match.
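To make the layout concrete, here is a minimal sketch using a mock VERSION file (the path and all field values are made up for illustration; the real file lives under dfs.data.dir/current/):

```shell
# Create a mock VERSION file mirroring the layout Hadoop writes
# under <dfs.data.dir>/current/ (all values here are illustrative).
mkdir -p /tmp/mock-dfs-data/current
cat > /tmp/mock-dfs-data/current/VERSION <<'EOF'
namespaceID=155319143
storageID=DS-mock-storage-id
cTime=0
storageType=DATA_NODE
layoutVersion=-32
EOF

# Extract just the namespaceID, the field that must match the NameNode's.
grep '^namespaceID=' /tmp/mock-dfs-data/current/VERSION | cut -d= -f2
```

The same grep works against the NameNode's VERSION file under dfs.name.dir/current/, which makes it easy to compare the two IDs side by side.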

Third, the solution
Change the namespaceID in the current/VERSION file under the dfs.data.dir path on each DataNode so that it matches the NameNode's namespaceID.
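A minimal sketch of that fix, using mock paths (substitute your actual dfs.name.dir and dfs.data.dir; the mock setup lines only exist to make the example self-contained):

```shell
# Hypothetical paths -- replace with your real dfs.name.dir / dfs.data.dir.
NAME_VERSION=/tmp/mock-dfs-name/current/VERSION
DATA_VERSION=/tmp/mock-dfs-data2/current/VERSION

# Mock setup standing in for an existing NameNode and a stale DataNode.
mkdir -p "$(dirname "$NAME_VERSION")" "$(dirname "$DATA_VERSION")"
echo 'namespaceID=155319143'  > "$NAME_VERSION"
echo 'namespaceID=1036135033' > "$DATA_VERSION"

# Copy the NameNode's namespaceID into the DataNode's VERSION file
# (sed -i as used here is GNU sed syntax).
NS_ID=$(grep '^namespaceID=' "$NAME_VERSION" | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=${NS_ID}/" "$DATA_VERSION"

grep '^namespaceID=' "$DATA_VERSION"
```

Run this (with real paths) on each DataNode while it is stopped, then start the DataNode again.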

If you run into trouble during installation, or Hadoop does not run after the installation steps are complete, it is recommended to read the log information carefully: Hadoop records detailed logs, and the log files are saved in the logs folder.
Whether it is startup, a frequently used MapReduce job, or information about HDFS, Hadoop has log files that can be analyzed.
For example:
Inconsistent namespaceIDs between the NameNode and DataNode are an error many people encounter during installation. The log information is:

java.io.IOException: Incompatible namespaceIDs in /root/tmp/dfs/data: namenode namespaceID = 1307672299; datanode namespaceID = 389959598

If HDFS fails to start, the reader can query the log; analyzing it shows, as above, that the NameNode and DataNode namespaceIDs are inconsistent.
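Searching the DataNode log for this error can be sketched as follows (a mock log file is created here so the example is self-contained; real logs live under the logs folder of the Hadoop installation):

```shell
# Mock a DataNode log (real logs are under $HADOOP_HOME/logs/).
mkdir -p /tmp/mock-hadoop-logs
cat > /tmp/mock-hadoop-logs/hadoop-datanode.log <<'EOF'
2012-05-01 10:00:00 INFO  org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG
2012-05-01 10:00:01 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /root/tmp/dfs/data
EOF

# Search the log for the telltale error, with line numbers.
grep -n 'Incompatible namespaceIDs' /tmp/mock-hadoop-logs/hadoop-datanode.log
```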
This problem is generally caused by formatting the NameNode two or more times, and there are two ways to solve it. The first method is to delete all the DataNode data (delete the VERSION file under /hdfs/data/current on each DataNode in the cluster), then run hadoop namenode -format and restart the cluster; the error disappears. <recommended> The second method is to modify the namespaceID of each DataNode (located in the /hdfs/data/current/VERSION file) <preferred>, or to modify the NameNode's namespaceID (located in the /hdfs/name/current/VERSION file), so that they are consistent.
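The first (recommended) method can be sketched like this, against a mock data directory (point DATA_DIR at your real dfs.data.dir on each DataNode; the Hadoop commands are left as comments because they must run on a live cluster and the format step destroys all HDFS data):

```shell
# Hypothetical data dir -- point this at dfs.data.dir on each DataNode.
DATA_DIR=/tmp/mock-dfs-data3

# Mock content standing in for a stale DataNode storage directory.
mkdir -p "$DATA_DIR/current"
echo 'namespaceID=1036135033' > "$DATA_DIR/current/VERSION"

# Method 1: remove the stale storage (at minimum current/VERSION) on
# every DataNode, then reformat the NameNode and restart the cluster.
rm -rf "$DATA_DIR/current"
# bin/hadoop namenode -format   # run on the NameNode (erases HDFS data!)
# bin/start-all.sh
```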
The following two methods may also be used in practical applications.
1) Restart the failed DataNode or JobTracker. When a problem occurs on a single node of a Hadoop cluster, it is generally not necessary to restart the entire system; just restart that node, and it will automatically reconnect to the cluster.
Enter the following commands on the failed node:

bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker


2) Add a DataNode or TaskTracker dynamically. The following commands let the user dynamically add a node to the cluster:

bin/hadoop-daemon.sh --config ./conf start datanode
bin/hadoop-daemon.sh --config ./conf start tasktracker

