Hadoop: error uploading files to HDFS

Source: Internet
Author: User
Tags: hadoop fs

Running the command:

hadoop fs -put /opt/program/userall20140828 hdfs://localhost:9000/tmp/tvbox/

the upload to HDFS fails with the following error:


14/12/11 17:57:49 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/tvbox/behavior_20141210.log could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
14/12/11 17:57:49 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
14/12/11 17:57:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/tmp/tvbox/behavior_20141210.log" - Aborting...
put: java.io.IOException: File /tmp/tvbox/behavior_20141210.log could only be replicated to 0 nodes, instead of 1
14/12/11 17:57:49 ERROR hdfs.DFSClient: Exception closing file /tmp/tvbox/behavior_20141210.log: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/tvbox/behavior_20141210.log could only be replicated to 0 nodes, instead of 1
        ... (same stack trace as above)
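Before working through the possible causes, it helps to check what the NameNode actually sees: this error almost always means no live DataNode with free space was available to accept the block. A minimal diagnostic sketch (the log path is an assumption based on the 0.21.0 install used later in this article):

```shell
# Quick diagnostics for "could only be replicated to 0 nodes": the
# NameNode is reporting that no live DataNode could accept the block.

# How many live DataNodes does the NameNode see, and how much space
# do they have? (Look at "Datanodes available" and "DFS Remaining".)
hadoop dfsadmin -report

# Are the NameNode and DataNode JVMs actually running on this host?
jps

# Check the DataNode log for registration or disk errors; the path
# below is an assumption -- adjust it to your installation.
tail -n 50 /usr/hadoop-0.21.0/logs/hadoop-root-datanode-*.log
```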

A search online turned up the following likely causes:

1. Firewall problem (ruled out)

View iptables status:

service iptables status

Enable or disable iptables at boot:

Enable: chkconfig iptables on

Disable: chkconfig iptables off

Start or stop the iptables service:

Start: service iptables start

Stop: service iptables stop
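To actually rule the firewall in or out, rather than just toggling the service, one quick test (a sketch using the same CentOS-era service/chkconfig commands as above) is to stop iptables, retry the upload once, and then restore the firewall regardless of the result:

```shell
# Rule the firewall in or out: stop iptables, retry the upload once,
# then turn the firewall back on either way.
service iptables status            # is it running right now?
chkconfig --list iptables          # is it enabled at boot?

service iptables stop
hadoop fs -put /opt/program/userall20140828 hdfs://localhost:9000/tmp/tvbox/
service iptables start             # restore the firewall
```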


2. Daemon start order: the NameNode must be started first, then the DataNode, and only after that the JobTracker and TaskTracker (ruled out)

1. Restart the NameNode

# hadoop-daemon.sh start namenode

starting namenode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-namenode-www.keli.com.out

2. Restart the DataNode

# hadoop-daemon.sh start datanode

starting datanode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-datanode-www.keli.com.out

The start order used by start-all.sh is already correct, so this was not the cause.
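Written out by hand, the ordering that start-all.sh follows looks like this (a sketch using the same hadoop-daemon.sh script as above; on a multi-node cluster the DataNode and TaskTracker lines would run on each worker):

```shell
# Manual equivalent of start-all.sh's daemon ordering:
hadoop-daemon.sh start namenode      # 1. NameNode first
hadoop-daemon.sh start datanode      # 2. then the DataNode(s)
hadoop-daemon.sh start jobtracker    # 3. then the JobTracker
hadoop-daemon.sh start tasktracker   # 4. finally the TaskTracker(s)
```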

3. Disk space problem (this was the cause!)

Steps to resolve:

1. Check space usage with the command df -ah:

# df -ah
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              18G   15G   12M 100% /
proc                     0     0     0    - /proc
sysfs                    0     0     0    - /sys
devpts                   0     0     0    - /dev/pts
tmpfs                 937M  224K  937M   1% /dev/shm
/dev/sda1             291M   37M  240M  14% /boot
none                     0     0     0    - /proc/sys/fs/binfmt_misc
.host:/               196G  209M  196G   1% /mnt/hgfs
vmware-vmblock           0     0     0    - /var/run/vmblock-fuse
gvfs-fuse-daemon         0     0     0    - /root/.gvfs

2. Back up and then clear the logs under hadoop/logs.
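A sketch of that back-up-then-clear step (LOG_DIR is an assumption; point it at your installation's logs directory):

```shell
# Back up the Hadoop logs, then truncate them in place. LOG_DIR is an
# assumption -- adjust it to your installation's logs directory.
LOG_DIR=${LOG_DIR:-/usr/hadoop-0.21.0/logs}
BACKUP=/tmp/hadoop-logs-backup-$(date +%Y%m%d).tar.gz

if [ -d "$LOG_DIR" ]; then
    # 1. Archive everything first so nothing is lost.
    tar -czf "$BACKUP" -C "$LOG_DIR" .

    # 2. Truncate log files in place rather than deleting them: running
    #    daemons keep their logs open, and space freed by rm on an open
    #    file is not reclaimed until the daemon restarts.
    find "$LOG_DIR" -type f \( -name '*.log*' -o -name '*.out*' \) \
        -exec sh -c ': > "$1"' _ {} \;
fi

# 3. Confirm the space came back.
df -ah
```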

Check the space usage again, upload the file again, and this time it succeeds.


On reclaiming space: even after clearing the logs, the disk still reports 15G used, so there must be other places worth cleaning up as well. Suggestions are welcome!


# df -ah
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              18G   15G  2.1G  88% /
proc                     0     0     0    - /proc
sysfs                    0     0     0    - /sys
devpts                   0     0     0    - /dev/pts
tmpfs                 937M  224K  937M   1% /dev/shm
/dev/sda1             291M   37M  240M  14% /boot
none                     0     0     0    - /proc/sys/fs/binfmt_misc
.host:/               196G  209M  196G   1% /mnt/hgfs
vmware-vmblock           0     0     0    - /var/run/vmblock-fuse
gvfs-fuse-daemon         0     0     0    - /root/.gvfs


