DFS namenode format causes datanode to be unable to connect

Source: Internet
Author: User

 

Problem

hadoop@potr134pc26:/usr/local/hadoop/bin$ rm -r /usr/local/hadoop-datastore/
---- now there is no hadoop-datastore folder locally
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop namenode -format
10/02/10 16:33:50 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = potr134pc26/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
Re-format filesystem in /home/hadoop-datastore/hadoop-hadoop/dfs/name ? (Y or N) y
10/02/10 16:33:54 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
10/02/10 16:33:54 INFO namenode.FSNamesystem: supergroup=supergroup
10/02/10 16:33:54 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/02/10 16:33:54 INFO common.Storage: Image file of size 96 saved in 0 seconds.
10/02/10 16:33:54 INFO common.Storage: Storage directory /home/hadoop-datastore/hadoop-hadoop/dfs/name has been successfully formatted.
10/02/10 16:33:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at potr134pc26/127.0.0.1
************************************************************/
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-potr134pc26.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-potr134pc26.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-potr134pc26.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-potr134pc26.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-potr134pc26.out

hadoop@potr134pc26:/usr/local/hadoop/bin$ jps
27461 Jps
27354 TaskTracker
27158 SecondaryNameNode
27250 JobTracker
26923 NameNode
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: %
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

--- (At this point, when I checked the log, the datanode still wasn't up and running) ----------
mkdir /usr/local/hadoop-datastore
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-potr134pc26.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-potr134pc26.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-potr134pc26.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-potr134pc26.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-potr134pc26.out
hadoop@potr134pc26:/usr/local/hadoop/bin$ jps
28038 NameNode
28536 Jps
28154 DataNode
28365 JobTracker
28470 TaskTracker
28272 SecondaryNameNode

./hadoop dfs -copyFromLocal /home/hadoop/Desktop/*.txt txtinput
copyFromLocal: 'txtinput': specified destination directory does not exist
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfs -mkdir txtinput
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfs -copyFromLocal /home/hadoop/Desktop/*.txt txtinput
10/02/10 16:44:36 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/txtinput/202.16.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

10/02/10 16:44:36 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/02/10 16:44:36 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/txtinput/202.16.txt" - Aborting...
10/02/10 16:44:36 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/txtinput/7ldvc10.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

10/02/10 16:44:36 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/02/10 16:44:36 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/txtinput/7ldvc10.txt" - Aborting...
copyFromLocal: java.io.IOException: File /user/hadoop/txtinput/202.16.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

java.io.IOException: File /user/hadoop/txtinput/7ldvc10.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: %
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)
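
The report shows zero datanodes even though jps lists a DataNode process, which points at the datanode registering and being rejected. On Hadoop 0.20.x, a datanode in this state typically logs a "java.io.IOException: Incompatible namespaceIDs" error. A quick way to check is to grep the datanode log for that message; the sketch below simulates the log file so the check can be demonstrated anywhere (the log path and timestamp are stand-ins, not taken from the session above):

```shell
# Hypothetical datanode log, standing in for
# /usr/local/hadoop/logs/hadoop-hadoop-datanode-potr134pc26.log
LOG=/tmp/hadoop-hadoop-datanode-potr134pc26.log
cat > "$LOG" <<'EOF'
2010-02-10 16:40:12 ERROR datanode.DataNode: java.io.IOException:
Incompatible namespaceIDs in /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
EOF
# If this message is present, the datanode is refusing to join the namenode:
if grep -q 'Incompatible namespaceIDs' "$LOG"; then
  echo "datanode rejected: namespaceID mismatch"
fi
```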

On Wed, 10 Feb 2010, E. Sammer wrote:

> On 2/10/10, Nick Klosterman wrote:
>> It appears I have incompatible namespaceIDs. Any thoughts on how to
>> resolve that? This is what the full datanode log is saying:
>
> Was this data node part of another DFS cluster at some point? It looks like
> you've reformatted the name node since the datanode connected to it. The
> datanode will refuse to connect to a namenode with a different namespaceID,
> because the data node would have blocks (possibly with the same IDs) from
> another cluster. It's a stop-gap safety mechanism. You'd have to destroy the
> data directory on the data node to "reinitialize" it so it picks up the new
> namespaceID from the name node, at which point it will be allowed to connect.
>
> Just to be clear, this will also kill all data that was stored on the data
> node, so don't do this lightly.
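
The fix described in the reply can be sketched as follows. In 0.20.x the namespaceID lives in a current/VERSION file under each storage directory; everything else here (the /tmp paths and the two ID values) is made up for illustration, so the mechanism can be shown without a running cluster. On a real node you would stop the daemons, remove the dfs/data directory under your hadoop.tmp.dir, and restart; the datanode then recreates it with the namenode's current namespaceID. Warning: this destroys every block replica that datanode holds.

```shell
NAME_DIR=/tmp/dfs-demo/name   # stands in for .../dfs/name on the namenode
DATA_DIR=/tmp/dfs-demo/data   # stands in for .../dfs/data on the datanode
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo 'namespaceID=1164470464' > "$NAME_DIR/current/VERSION"   # fresh, post-format ID (made up)
echo 'namespaceID=565447340'  > "$DATA_DIR/current/VERSION"   # stale, pre-format ID (made up)
# The fix: wipe the datanode's data directory entirely...
rm -rf "$DATA_DIR"
# ...so that on the next start the datanode recreates it and adopts the
# namenode's current namespaceID (simulated here by copying the VERSION file):
mkdir -p "$DATA_DIR/current"
cp "$NAME_DIR/current/VERSION" "$DATA_DIR/current/VERSION"
diff "$NAME_DIR/current/VERSION" "$DATA_DIR/current/VERSION" \
  && echo "namespaceIDs now match"
```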
