Format aborted in /data0/hadoop-name

Source: Internet
Author: User
Tags: hadoop fs
[User6@das0 hadoop-0.20.203.0]$ bin/hadoop namenode -format
12/02/20 14:05:17 INFO namenode.NameNode: STARTUP_MSG:
Re-format filesystem in /data0/hadoop-name ? (Y or N) y
Format aborted in /data0/hadoop-name
12/02/20 14:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:

Hadoop was then started, but http://das0:5007 could not be displayed. So I deleted the entire /data0/hadoop-name folder and formatted again, and this time it succeeded!

[Zhangpeng6@das0 hadoop-0.20.203.0]$ bin/hadoop namenode -format
12/02/20 14:09:57 INFO namenode.NameNode: STARTUP_MSG:
12/02/20 14:09:57 INFO util.GSet: VM type       = 64-bit
12/02/20 14:09:57 INFO util.GSet: 2% max memory = 177.77875 MB
12/02/20 14:09:57 INFO util.GSet: capacity      = 2^24 = 16777216 entries
12/02/20 14:09:57 INFO util.GSet: recommended=16777216, actual=16777216
12/02/20 14:09:57 INFO namenode.FSNamesystem: fsOwner=zhangpeng6
12/02/20 14:09:57 INFO namenode.FSNamesystem: supergroup=supergroup
12/02/20 14:09:57 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/02/20 14:09:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/02/20 14:09:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/20 14:09:57 INFO namenode.NameNode: Caching file names occurring more than 10 times
12/02/20 14:09:57 INFO common.Storage: Image file of size 116 saved in 0 seconds.
12/02/20 14:09:57 INFO common.Storage: Storage directory /data0/hadoop-name/namenode has been successfully formatted.
12/02/20 14:09:57 INFO namenode.NameNode: SHUTDOWN_MSG:

Summary: before formatting the NameNode, make sure that the directory specified by dfs.name.dir does not already exist.

This is how Hadoop protects an existing cluster from being re-formatted by mistake.
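
A minimal sketch of that check, assuming dfs.name.dir is set in conf/hdfs-site.xml as in a standard 0.20.x setup (the /data0/hadoop-name path below is simply the one from this post):

# see which directory dfs.name.dir points to
grep -A1 "dfs.name.dir" conf/hdfs-site.xml
# if it already exists, move it aside before formatting
# (this discards the old HDFS metadata)
mv /data0/hadoop-name /data0/hadoop-name.bak
bin/hadoop namenode -format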

After restarting SSH and starting Hadoop again, running the bin/hadoop fs -ls command always produced an error like the following.

10/03/15 19:43:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).

......
10/03/15 19:43:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).

Bad connection to FS. Command aborted.

At this point, running stop-all.sh prints a message like "no namenode to stop".
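
When "Retrying connect to server: localhost/127.0.0.1:9000" keeps appearing, the client side is usually fine and the NameNode simply is not listening. A quick way to confirm this (generic commands, not from the original post):

# is a NameNode JVM running at all?
jps | grep -i namenode
# is anything listening on the fs.default.name port (9000 here)?
netstat -tlnp | grep 9000
# if not, the real startup error is in the NameNode log
tail -n 50 logs/hadoop-*-namenode-*.log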

Then I checked the NameNode log. The error was as follows:

10:55:17,655 INFO org.apache.hadoop.metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: chjjun: unknown name or service
    at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
    at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:95)
    at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:84)
    at org.apache.hadoop.metrics.jvm.JvmMetrics.<init>(JvmMetrics.java:87)
    at org.apache.hadoop.metrics.jvm.JvmMetrics.init(JvmMetrics.java:78)
    at org.apache.hadoop.metrics.jvm.JvmMetrics.init(JvmMetrics.java:65)
    at org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.<init>(NameNodeMetrics.java:103)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initMetrics(NameNode.java:199)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:302)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Caused by: java.net.UnknownHostException: chjjun: unknown name or service
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1434)

In other words, the hostname could not be resolved. Searching the Internet, I found this article: http://lxy2330.iteye.com/blog/1112806

Then I looked at my /etc/hosts, which contained:

127.0.0.1  localhost.localdomain  localhost

::1  localhost6.localdomain6  localhost6

Looking at /etc/sysconfig/network, I found:

HOSTNAME=chjjun

According to that article, the hostname cannot be resolved because it has no corresponding IP entry in /etc/hosts, so I replaced chjjun with localhost.

Then I ran /etc/rc.d/init.d/network restart to restart the network service.
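
An alternative fix (not the one used in this post) is to keep the hostname and instead map it to an address in /etc/hosts, so that getLocalHost() can resolve it:

# append a mapping for the unresolvable hostname (chjjun here); needs root
echo "127.0.0.1   chjjun" >> /etc/hosts
# verify that the name now resolves
ping -c 1 chjjun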

Then I ran hadoop namenode -format again.

However, the same error still occurred.

Finally, having no better idea, I rebooted the machine, restarted SSH, and started Hadoop again.

Running bin/hadoop fs -ls still returned an error.

However, this error was different from the previous one, and the log file name had also changed. The earlier errors were written to:

hadoop-chjjun-namenode-chjjun.log

The new log file name was:

hadoop-chjjun-namenode-localhost.log

The error content has also changed:

java.io.EOFException
    at java.io.RandomAccessFile.readInt(RandomAccessFile.java:776)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:836)
    at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:697)
    at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:62)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:476)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:402)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
13:46:48,767 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
    (followed by the same stack trace as above)

There seems to be some hope.
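
For reference, this EOFException is thrown while FSImage reads an image file in the storage directory and hits end-of-file, which usually means the file is empty or truncated, for example left over from the earlier aborted format. A quick check, assuming the same /data0/hadoop-name directory as above:

# zero-length image or VERSION files point to a broken, half-finished format
ls -lR /data0/hadoop-name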

Then I ran the bin/hadoop namenode -format command again.

After that command completed, Hadoop ran normally.
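
Putting the recovery together, a sketch of the full sequence under the same assumptions as above (paths from this post, standard 0.20.x scripts):

bin/stop-all.sh                 # make sure nothing is still running
rm -rf /data0/hadoop-name       # remove the broken name directory (destroys any HDFS metadata)
bin/hadoop namenode -format     # with the old directory gone, no re-format prompt appears
bin/start-all.sh
bin/hadoop fs -ls /             # should now list the (empty) HDFS root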
