Exception Analysis
1. "cocould only be replicated to 0 nodes, instead of 1" Exception
(1) exception description
The configuration above is correct and the following steps have been completed:
[root@localhost hadoop-0.20.0]# bin/hadoop namenode -format
[root@localhost hadoop-0.20.0]# bin/start-all.sh
At this point the startup output suggests that all five daemons (jobtracker, tasktracker, namenode, datanode, and secondarynamenode) have started successfully. However, running the jps command to check the processes shows that this is not the case:
4281 Jps
4007 SecondaryNameNode
3771 NameNode
As you can see, only two of the daemons started successfully; the rest did not. If you carry on regardless and run the file-upload command that precedes the wordcount example:
[root@localhost hadoop-0.20.0]# bin/hadoop fs -put input in
a series of exceptions is thrown, as shown below:
10/08/02 15:36:04 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)
10/08/02 15:36:04 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/license.txt retries left 4
10/08/02 15:36:04 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
        ... (same stack trace as above)
10/08/02 15:36:04 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/license.txt retries left 3
10/08/02 15:36:05 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
        ... (same stack trace as above)
10/08/02 15:36:05 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/license.txt retries left 2
10/08/02 15:36:07 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
        ... (same stack trace as above)
10/08/02 15:36:07 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/license.txt retries left 1
10/08/02 15:36:10 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
        ... (same stack trace as above)
10/08/02 15:36:10 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/08/02 15:36:10 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/in/license.txt" - Aborting...
put: java.io.IOException: File /user/root/in/license.txt could only be replicated to 0 nodes, instead of 1
When you see the message "could only be replicated to 0 nodes, instead of 1", your first thought may be that the dfs.replication property in the hdfs-site.xml configuration file is set incorrectly, but that is not the case here.
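If you do want to rule that out, the setting can be checked quickly from the shell. This is only a sketch; it assumes the configuration lives in the conf/ directory of this hadoop-0.20.0 installation:

# Sketch: confirm the replication setting (path assumes the conf/ directory of this install).
# For a single-node setup, dfs.replication is normally 1.
grep -A 1 "dfs.replication" conf/hdfs-site.xml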
At this point you need to look at the startup logs; mine are located under /root/hadoop-0.20.0/logs, which contains the following files:
hadoop-root-datanode-localhost.log    hadoop-root-namenode-localhost.log           hadoop-root-tasktracker-localhost.log
hadoop-root-datanode-localhost.out    hadoop-root-namenode-localhost.out           hadoop-root-tasktracker-localhost.out
hadoop-root-jobtracker-localhost.log  hadoop-root-secondarynamenode-localhost.log  history
hadoop-root-jobtracker-localhost.out  hadoop-root-secondarynamenode-localhost.out
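Since the datanode is one of the daemons that failed to come up, its log is the one to check first. A minimal sketch, assuming the log directory listed above:

# Sketch: show the most recent lines of the DataNode log (path taken from the listing above).
tail -n 50 /root/hadoop-0.20.0/logs/hadoop-root-datanode-localhost.log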
Open the hadoop-root-datanode-localhost.log file and you will see the following exception:
2010-08-02 15:38:34,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
2010-08-02 15:38:35,381 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data: namenode namespaceID = 409052671; datanode namespaceID = 769845957
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
2010-08-02 15:38:35,382 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
************************************************************/
The line "Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data" points to the cause: the namespaceID stored in /tmp/hadoop-root/dfs/data does not match the one the namenode expects. This usually happens because a different Hadoop version or an earlier format left stale data behind in the /tmp/hadoop-root/dfs/data directory. In my case, the problem occurred because I had just tried Hadoop 0.19.0 and had not cleaned up its data after that run.
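You can confirm the mismatch directly by comparing the namespaceID recorded on each side. This is a sketch that assumes the default storage layout under /tmp/hadoop-root (dfs.name.dir and dfs.data.dir not overridden in hdfs-site.xml):

# Sketch: compare the two namespaceIDs (paths assume the default /tmp/hadoop-root layout).
grep namespaceID /tmp/hadoop-root/dfs/name/current/VERSION   # the ID the namenode expects
grep namespaceID /tmp/hadoop-root/dfs/data/current/VERSION   # the ID stored by the datanode

If the two values differ, the datanode's storage is left over from an earlier format or an older installation.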
(2) Solution
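One way to clear out the stale datanode storage and start over is sketched below, again assuming the data lives in the default /tmp/hadoop-root location (adjust the path if dfs.data.dir points elsewhere; note that reformatting the namenode also wipes anything already stored in HDFS):

# Sketch: remove the stale DataNode data and restart from a clean state.
# Paths assume the default /tmp/hadoop-root layout; adjust if your configuration differs.
bin/stop-all.sh                      # stop any daemons that are still running
rm -rf /tmp/hadoop-root/dfs/data     # remove the DataNode storage left over from the old run
bin/hadoop namenode -format          # reformat so namenode and datanode share a fresh namespaceID
bin/start-all.sh                     # start the daemons again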
After the data in that directory has been cleared, the system runs normally. Once the daemons have been restarted, running jps now shows the expected processes:
5386 JobTracker
5253 DataNode
5529 Jps
4874 SecondaryNameNode
5489 TaskTracker
4649 NameNode
All five daemons are now running, so you can upload files to HDFS and run the wordcount example.
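For completeness, those remaining steps might look like the following sketch; the examples jar name and the local input directory are assumptions based on a stock 0.20.0 install, so adjust them to your setup:

# Sketch: upload the input and run the wordcount example (jar name assumed from a stock 0.20.0 install).
bin/hadoop fs -put input in                                  # copy the local input directory into HDFS as "in"
bin/hadoop jar hadoop-0.20.0-examples.jar wordcount in out   # run the wordcount example
bin/hadoop fs -cat out/part-*                                # print the word counts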
This article is from the Linux community website (www.linuxidc.com); original link: http://www.linuxidc.com/Linux/2010-08/27484p3.htm