An exception occurred when uploading from local to HDFS

Source: Internet
Author: User
Tags: hdfs dfs

    • An exception occurred in hdfs dfs -put when uploading from local to HDFS
The error log of the DataNode on the same machine as the NameNode is as follows:

    2015-12-03 09:54:03,083 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:727ms (threshold=300ms)
    2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
    2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023
    java.io.IOException: No space left on device
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:345)
            at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
            at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
            at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
            at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
            at org.apache.hadoop.hdfs.protoco
    2015-12-03 09:54:04,050 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Block BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 unfinalized and removed.
    2015-12-03 09:54:04,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 received exception java.io.IOException: No space left on device
    2015-12-03 09:54:04,054 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hd1:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.165.114.138:57315 dst: /10.172.153.46:50010
    java.io.IOException: No space left on device
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:345)
            at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
            at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
            at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
            at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
            at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
            at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
            at java.lang.Thread.run(Thread.java:745)

The log of the other DataNode (10.172.218.18) is as follows:

    2015-12-03 17:54:04,111 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.172.218.18, datanodeUuid=7c882efa-f159-4477-a322-30cf55c84598, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-183048c9-89b2-44b4-a224-21f04d2a8065;nsid=275180848;c=0): Failed to transfer BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026 to 10.172.153.46:50010 got java.net.SocketException: Original Exception: java.io.IOException: Connection reset by peer
            at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
            at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:433)
            at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:565)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2017)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.IOException: Connection reset by peer
            ... 8 more
    2015-12-03 17:54:04,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
    2015-12-03 17:57:39,288 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026

From the logs it can be seen that the device ran out of space ("No space left on device"): the server's disk is small, so some junk data had to be deleted to free up room.
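
For reference, below is a minimal Java sketch of the same upload through the Hadoop FileSystem API, the programmatic equivalent of hdfs dfs -put. The class name and paths are placeholders, and it assumes the cluster configuration (core-site.xml/hdfs-site.xml) is on the classpath; checking FsStatus before the copy is one way to notice a nearly full filesystem before the write fails.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;
    import org.apache.hadoop.fs.Path;

    import java.io.IOException;

    public class HdfsPutCheck {
        public static void main(String[] args) throws IOException {
            // Assumes fs.defaultFS points at the cluster (core-site.xml on the classpath).
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Report overall capacity before uploading; the failure above was caused
            // by DataNode disks running out of space.
            FsStatus status = fs.getStatus();
            System.out.printf("capacity=%d used=%d remaining=%d bytes%n",
                    status.getCapacity(), status.getUsed(), status.getRemaining());

            // Equivalent of `hdfs dfs -put <local> <hdfs>`; these paths are placeholders.
            Path local = new Path("/tmp/local-file.txt");
            Path remote = new Path("/user/hadoop/local-file.txt");
            try {
                fs.copyFromLocalFile(local, remote);
            } catch (IOException e) {
                // A DataNode-side "No space left on device" surfaces here as an IOException.
                System.err.println("Upload failed: " + e.getMessage());
                throw e;
            }
        }
    }

The same per-DataNode capacity figures are also visible from the shell with hdfs dfsadmin -report, which is the quickest way to confirm which nodes are out of space.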
