Hadoop exception: Inconsistent checkpoint fields

Source: Internet
Author: User

Hadoop SecondaryNameNode exception: Inconsistent checkpoint fields


Symptoms: no traffic on the cluster, yet the NameNode process sits at 100% CPU with excessive memory usage and nothing in its error log.


SecondaryNameNode error:

java.io.IOException: Inconsistent checkpoint fields.
LV = -57 namespaceID = 371613059 cTime = 0 ; clusterId = CID-b8a5f273-515a-434c-87c0-4446d4794c85 ; blockpoolId = BP-1082677108-127.0.0.1-1433842542163.
Expecting respectively: -57; 1687946377; 0; CID-603ff285-de5a-41a0-85e8-f033ea1916fc; BP-2591078-127.0.0.1-1433770362761.
        at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
        at java.lang.Thread.run(Thread.java:662)


There are a number of possible causes for this exception. One of them is that the SecondaryNameNode's checkpoint directory holds data from a different (stale) namespace than the NameNode's current data.
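You can confirm this by comparing the storage metadata kept by the two roles. A minimal sketch, assuming the NameNode metadata directory is /opt/hadoop-2.5.1/dfs/name (substitute your own dfs.namenode.name.dir); the namesecondary path is the one shown later in this article:

# Compare the VERSION files of the NameNode and the SecondaryNameNode.
# The NameNode path below is an example; adjust it to your dfs.namenode.name.dir.
cat /opt/hadoop-2.5.1/dfs/name/current/VERSION
cat /opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current/VERSION
# If namespaceID, clusterID, or blockpoolID differ between the two files,
# the SecondaryNameNode is checkpointing against a stale namespace.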

Workaround:

Manually delete the files under the SecondaryNameNode's checkpoint directory, and then restart Hadoop (see the command sketch below).
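A minimal command sketch of that workaround, assuming the namesecondary directory shown later in this article and a standard Hadoop 2.x layout under /opt/hadoop-2.5.1; adjust both paths to your installation:

# Stop HDFS, clear the SecondaryNameNode checkpoint directory, then restart.
/opt/hadoop-2.5.1/sbin/stop-dfs.sh
rm -rf /opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/*
/opt/hadoop-2.5.1/sbin/start-dfs.sh
# On the next checkpoint the SecondaryNameNode downloads a fresh fsimage and
# edit logs from the NameNode, so the IDs match again.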


Inspection shows that the edit logs under the SecondaryNameNode's checkpoint directory are, unexpectedly, from a long time ago:

/opt/hadoop-2.5.1/dfs/tmp/dfs/namesecondary/current


[[email protected] current]# ll
total 116
-rw-r--r-- 1 root root   42 Jun  8  2015 edits_0000000000000000001-0000000000000000002
-rw-r--r-- 1 root root 8991 Jun  8  2015 edits_0000000000000000003-0000000000000000089
-rw-r--r-- 1 root root 4370 Jun  8  2015 edits_0000000000000000090-0000000000000000123
-rw-r--r-- 1 root root 3817 Jun  9  2015 edits_0000000000000000124-0000000000000000152
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000153-0000000000000000172
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000173-0000000000000000192
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000193-0000000000000000212
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000213-0000000000000000232
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000233-0000000000000000252
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000253-0000000000000000272
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000273-0000000000000000292
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000293-0000000000000000312
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000313-0000000000000000332
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000333-0000000000000000352
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000353-0000000000000000372
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000373-0000000000000000392
-rw-r--r-- 1 root root 2466 Jun  9  2015 edits_0000000000000000393-0000000000000000412
-rw-r--r-- 1 root root 6732 Jun  9  2015 edits_0000000000000000413-0000000000000000468
-rw-r--r-- 1 root root 4819 Jun  9  2015 edits_0000000000000000469-0000000000000000504
-rw-r--r-- 1 root root 2839 Jun  9  2015 fsimage_0000000000000000468
-rw-r--r-- 1 root root   62 Jun  9  2015 fsimage_0000000000000000468.md5
-rw-r--r-- 1 root root 2547 Jun  9  2015 fsimage_0000000000000000504
-rw-r--r-- 1 root root   62 Jun  9  2015 fsimage_0000000000000000504.md5
-rw-r--r-- 1 root root  199 Jun  9  2015 VERSION


The resolution above assumes that hadoop.tmp.dir has been configured. If it has not been, you will not find the edit log files at this path; configure the property in hdfs-site.xml or core-site.xml.

The hadoop.tmp.dir parameter specifies the base temporary directory for HDFS, and it is best to set it explicitly; a configuration sketch follows below. If a newly added node, or a DataNode that inexplicably fails to start, is giving trouble, deleting the tmp directory on that node usually resolves it. However, if you delete this directory on the NameNode machine, you will have to re-run the NameNode format command.
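As a sketch, a core-site.xml entry for hadoop.tmp.dir might look like the following; the value is inferred from the directory layout shown above (namesecondary lives under ${hadoop.tmp.dir}/dfs/namesecondary), so substitute your own path:

<!-- core-site.xml: base temporary directory used by HDFS.
     /opt/hadoop-2.5.1/dfs/tmp matches the layout shown earlier in this article;
     replace it with your own location. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop-2.5.1/dfs/tmp</value>
</property>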

