Hadoop Deployment Errors


A single-node Hadoop deployment is simple and rarely goes wrong, but it is of little value for a production environment; its main use is getting a development setup running quickly.

Deploying Hadoop can fail for many reasons, some of them quite obscure.

For example, mismatched usernames cause authentication failures when the client and server communicate. The username should be identical on the client and on every server node, and passwordless SSH login should be set up between the nodes.
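As a minimal sketch, passwordless SSH from the master to the workers can be set up roughly as follows (the node names are placeholders for your own hosts; run this as the same user on every machine):

```shell
# Sketch of passwordless SSH setup, run on the master as the shared user.
# "slave1"/"slave2" below are hypothetical hostnames -- substitute your own.
setup_passwordless_ssh() {
    # Generate an RSA key pair without a passphrase, unless one already exists.
    [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
    # Append our public key to each node's ~/.ssh/authorized_keys.
    for node in "$@"; do
        ssh-copy-id "$node"
    done
}

# setup_passwordless_ssh slave1 slave2
# Afterwards "ssh slave1" should log in without a password prompt.
```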

1. The following error appears:

13/07/09 13:57:07 INFO ipc.Client: Retrying connect to server: master/192.168.2.200:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

java.net.ConnectException: Call to master/192.168.2.200:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1136)
        at org.apache.hadoop.ipc.Client.call(Client.java:1112)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy7.renewLease(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy7.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:379)
        at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:378)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:400)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:69)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:273)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:719)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:453)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:579)
        at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:202)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1243)
        at org.apache.hadoop.ipc.Client.call(Client.java:1087)
        ... 14 more

This error means the client could not connect to the server: either the NameNode is not running, or a firewall on the server is blocking the RPC port (9000 here).
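A quick way to tell the two cases apart is to probe the NameNode RPC port from the client. The host and port below come from the log above; this sketch assumes a bash with /dev/tcp support:

```shell
# Probe a TCP port from the client; prints "open" or "closed".
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# check_port master 9000
# "closed" while the NameNode process is running usually points at a
# firewall on the server; disable it (or open port 9000) and retest.
```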

2. The following error appears:

13/07/09 13:57:36 ERROR hdfs.DFSClient: Failed to close file /tmp/web304069331.log
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/web304069331.log could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
        at org.apache.hadoop.ipc.Client.call(Client.java:1107)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

This error has many possible causes; in general it means the NameNode and the DataNodes cannot communicate properly. In this case the startup logs showed that the DataNode could not resolve the machine name, so the fix is to correct the /etc/hostname and /etc/hosts files.
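As a sketch, using the IPs from the logs above and an illustrative worker name, /etc/hosts should contain the same cluster mappings on every node:

```text
# /etc/hosts (identical on every node; "slave1" is an illustrative name)
127.0.0.1      localhost
192.168.2.200  master
192.168.2.201  slave1
```

/etc/hostname on each machine should contain only that machine's own name (e.g. `master`), and that name should not be mapped to 127.0.0.1 in /etc/hosts, or other nodes will fail to reach it.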

3. The following error appears:

13/07/09 13:59:01 INFO hdfs.DFSClient: Exception in createBlockOutputStream 192.168.2.201:50010 java.net.SocketTimeoutException: 63000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.2.201:50010]
13/07/09 13:59:01 INFO hdfs.DFSClient: Abandoning blk_-6965665250189110825_1679
13/07/09 13:59:01 INFO hdfs.DFSClient: Excluding datanode 192.168.2.201:50010
13/07/09 13:59:01 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/web465901718.log could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
        at org.apache.hadoop.ipc.Client.call(Client.java:1107)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
13/07/09 13:59:01 WARN hdfs.DFSClient: Error Recovery for blk_-6965665250189110825_1679 bad datanode[0] nodes == null
13/07/09 13:59:01 WARN hdfs.DFSClient: Could not get block locations. Source file "/tmp/web465901718.log" - Aborting...
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/web465901718.log could only be replicated to 0 nodes, instead of 1
        (stack trace identical to the one above)
^C
Jul 09, 2013 2:00:46 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-8080"]
Jul 09, 2013 2:00:46 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["ajp-bio-8009"]
Jul 09, 2013 2:00:46 PM org.apache.catalina.core.StandardService stopInternal
INFO: Stopping service Catalina
13/07/09 14:00:46 ERROR hdfs.DFSClient: Failed to close file /tmp/web465901718.log
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/web465901718.log could only be replicated to 0 nodes, instead of 1
        (stack trace identical to the one above)
Jul 09, 2013 2:00:46 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads

This is caused by the client failing to communicate with a DataNode (here the connection to the data-transfer port 50010 timed out). The client program must be able to reach every node in the cluster for data transfer to work.
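Since HDFS writes go directly from the client to the DataNodes, a simple check is to probe each DataNode's transfer port (50010 by default) from the client machine. The node list below is illustrative; extend it to cover your whole cluster:

```shell
# Probe port 50010 on each DataNode from the client machine.
# IPs taken from the logs above; assumes bash's /dev/tcp support.
for node in 192.168.2.200 192.168.2.201; do
    if timeout 3 bash -c "echo > /dev/tcp/${node}/50010" 2>/dev/null; then
        echo "$node: reachable"
    else
        echo "$node: NOT reachable"
    fi
done
```

Any node reported as not reachable points at a firewall, a wrong hosts mapping, or a DataNode that is not running.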

4. Running hadoop namenode -format prints:

Format aborted in /home/cnsworder/hadoop/name

Delete this directory and run the format again.
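A minimal sketch of the fix, using the path from the error message above. Note that this erases existing HDFS metadata, so it is only appropriate on a cluster whose data you can afford to lose:

```shell
# WARNING: destroys existing HDFS metadata. Only for fresh/disposable clusters.
reformat_namenode() {
    stop-all.sh                           # stop HDFS/MapReduce daemons first
    rm -rf /home/cnsworder/hadoop/name    # stale metadata dir from the error message
    hadoop namenode -format               # reformat, then restart
    start-all.sh
}

# reformat_namenode
```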
