Hadoop startup node Datanode failure Solution


When dynamically adding a Hadoop slave node, the following problem occurred:

[root@hadoop current]# hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/hadoop1.1/libexec/../logs/hadoop-root-datanode-hadoop.out

[root@hadoop ~]# jps

The jps command showed that no DataNode had started, so I checked the file named in the startup prompt, hadoop-root-datanode-hadoop.out, but it was blank. The details turned out to be in the log file /usr/local/hadoop1.1/logs/hadoop-root-datanode-hadoop.log:

[root@hadoop current]# vim /usr/local/hadoop1.1/logs/hadoop-root-datanode-hadoop.log
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
2014-10-31 19:24:28,543 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-10-31 19:24:28,565 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-10-31 19:24:28,566 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-10-31 19:24:28,566 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-10-31 19:24:28,728 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-10-31 19:24:29,221 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /usr/local/hadoop/tmp/dfs/data: namenode namespaceID = 942590743; datanode namespaceID = 463031076
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:399)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:309)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
2014-10-31 19:24:29,229 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop/192.168.0.100
************************************************************/

Reading the log, the word "Incompatible" in the ERROR line is the key: the DataNode's namespaceID does not match the NameNode's namespaceID, and that mismatch is why the DataNode shut itself down.

Solution:

(1) First look at the hdfs-site.xml configuration file under the Hadoop path:

[root@hadoop current]# vim /usr/local/hadoop1.1/conf/hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>${hadoop.tmp.dir}/dfs/name</value>
    <description>If this is a comma-delimited list of directories,
      then the name table is replicated in all of the directories,
      for redundancy.
    </description>
  </property>
</configuration>

There is no dfs.data.dir entry here, so this DataNode uses the default data path. If instead you have something like:

<property>
  <name>dfs.data.dir</name>
  <value>/data/hdfs/data</value>
</property>

then the DataNode's data directory is not the default one but a path you set yourself.

(2) Edit the VERSION file in the current directory under the DataNode's dfs.data.dir. Since I use the default directory, the path is /usr/local/hadoop/tmp/dfs/data/current/VERSION; different versions may use different paths, so locate yours first.

[root@hadoop current]# vim /usr/local/hadoop/tmp/dfs/data/current/VERSION

#Thu Oct 30 04:52:01 PDT 2014
namespaceID=463031076
storageID=DS-1787154912-192.168.0.100-50010-1413940826285
cTime=0
storageType=DATA_NODE
layoutVersion=-32

Note namespaceID=463031076: it is exactly the "datanode namespaceID = 463031076" reported in hadoop-root-datanode-hadoop.log, which confirms that the DataNode is reading this file and that we have found the source of the error.

(3) Modify the namespaceID in this VERSION file so that it matches "namenode namespaceID = 942590743" from hadoop-root-datanode-hadoop.log. (You can probably guess where the NameNode's namespaceID comes from:)

[root@hadoop current]# vim /usr/local/hadoop/tmp/dfs/name/current/VERSION

#Fri Oct 31 19:23:44 PDT 2014
namespaceID=942590743
cTime=0
storageType=NAME_NODE
layoutVersion=-32

The namespaceID here is exactly the "namenode namespaceID = 942590743" in hadoop-root-datanode-hadoop.log.

(4) After the modification, start the DataNode again:

[root@hadoop current]# hadoop-daemon.sh start datanode
[root@hadoop current]# jps
8581 DataNode

Seeing DataNode in the jps output means it is now running.
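The manual edit in step (3) can also be scripted. The following is a minimal sketch (not from the original walkthrough) that copies the NameNode's namespaceID into the DataNode's VERSION file. To keep it safe to run anywhere, it recreates the two VERSION files from this article in a temporary directory; on a real node you would point NAME_VERSION and DATA_VERSION at dfs.name.dir/current/VERSION and dfs.data.dir/current/VERSION (in the default setup above, /usr/local/hadoop/tmp/dfs/name/current/VERSION and /usr/local/hadoop/tmp/dfs/data/current/VERSION), with the DataNode stopped first.

```shell
# Demo paths: throwaway copies, NOT the real HDFS directories.
DEMO=$(mktemp -d)
NAME_VERSION="$DEMO/name/current/VERSION"
DATA_VERSION="$DEMO/data/current/VERSION"
mkdir -p "$DEMO/name/current" "$DEMO/data/current"

# Recreate the two VERSION files from the walkthrough, with mismatched IDs.
printf 'namespaceID=942590743\ncTime=0\nstorageType=NAME_NODE\nlayoutVersion=-32\n' > "$NAME_VERSION"
printf 'namespaceID=463031076\nstorageID=DS-1787154912-192.168.0.100-50010-1413940826285\ncTime=0\nstorageType=DATA_NODE\nlayoutVersion=-32\n' > "$DATA_VERSION"

# Read the NameNode's namespaceID and write it into the DataNode's VERSION file.
NN_ID=$(grep '^namespaceID=' "$NAME_VERSION" | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=${NN_ID}/" "$DATA_VERSION"

grep '^namespaceID=' "$DATA_VERSION"   # now namespaceID=942590743
```

After running the equivalent on the real VERSION file, start the DataNode with hadoop-daemon.sh as in step (4) and confirm it with jps.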

