HDFS Remote Connection to Hadoop: Problems and Solutions

Question: When using an HDFS client to connect from a local machine to Hadoop deployed on an Alibaba Cloud server, the following exception occurred during HDFS operations: could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation. In addition, the file sizes shown on the administration web page were all 0.
Reason: A long search on Baidu turned up nothing beyond claims that the DataNode had not started, but running jps on the server showed the DataNode running normally, so that was not the answer. I later found this passage on Stack Overflow:

"It took me a week to figure out the problem in my situation. When the client (your program) asks the NameNode for a data operation, the NameNode picks a DataNode and navigates the client to it by giving the DataNode's IP to the client. But when the DataNode host is configured with multiple IPs and the NameNode gives you one your client can't access, the client adds that DataNode to the exclude list and asks the NameNode for a new one; eventually all DataNodes are excluded and you get this error. So check the node's IP settings before trying everything else!"

In other words: when the client operates on HDFS, it first connects to the NameNode, which then assigns the client a DataNode IP address. If the client cannot reach that IP, the client adds it to its exclusion list. My Alibaba Cloud server has multiple IP addresses, so the NameNode assigned me an unreachable one, which caused the problem.

Solution: When the client program is run on the Hadoop server itself, the error disappears. The Hadoop administration page lists the DataNode as iz234nvolhdz (10.253.102.93:50010). From my local machine, 10.253.102.93 responds to ping, but telnet to port 50010 fails, so the problem was the port: after port 50010 was opened, the DataNode could be accessed locally.

Reference:
https://stackoverflow.com/questions/5293446/hdfs-error-could-only-be-replicated-to-0-nodes-instead-of-1
http://blog.csdn.net/cht0112/article/details/72911307?utm_source=itdadao&utm_medium=referral
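The telnet check above can also be reproduced programmatically with a plain socket connect against the DataNode transfer port. This is a minimal sketch, not from the original post; the address and port are the ones mentioned above, and the 3-second timeout is an assumed value:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortCheck {
        public static void main(String[] args) throws Exception {
            // Equivalent of "telnet 10.253.102.93 50010": connect() succeeds
            // only if the DataNode transfer port is open and reachable.
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("10.253.102.93", 50010), 3000);
                System.out.println("port 50010 is reachable");
            }
        }
    }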

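For clients that must stay remote, another commonly used workaround for the multi-IP problem (not covered in the original post) is to ask the NameNode to report DataNode hostnames instead of internal IPs, and to resolve those hostnames on the client side, for example via /etc/hosts entries pointing at the public IP. The sketch below uses the standard HDFS client property dfs.client.use.datanode.hostname; the NameNode address is a hypothetical placeholder:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsClientExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; replace with your server's address.
            conf.set("fs.defaultFS", "hdfs://203.0.113.10:9000");
            // Have the client connect to DataNodes by hostname rather than by
            // the (possibly unreachable, internal) IP the NameNode reports.
            conf.set("dfs.client.use.datanode.hostname", "true");

            try (FileSystem fs = FileSystem.get(conf);
                 FSDataOutputStream out = fs.create(new Path("/tmp/test.txt"))) {
                out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
            }
        }
    }
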
Question: ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: iz234nvolhdz:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.253.102.93:59704 dst: /10.253.102.93:50010 java.io.IOException: Premature EOF from inputStream

Reason: The client forgot to close the output stream after uploading a file.

Solution: Flush and close the output stream when the upload is finished:

    outputStream.flush();
    outputStream.close();

Reference:
http://blog.csdn.net/menghuannvxia/article/details/44591619
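A sketch of a safer upload pattern, assuming a local-file-to-HDFS copy (the file names here are illustrative): try-with-resources guarantees the streams are flushed and closed even if the write throws, so the DataNode never sees a half-finished write.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsUploadExample {
        public static void main(String[] args) throws Exception {
            // Resources are closed in reverse order (out, in, fs) even on error,
            // avoiding the "Premature EOF from inputStream" on the DataNode.
            try (FileSystem fs = FileSystem.get(new Configuration());
                 InputStream in = new FileInputStream("local.txt");
                 FSDataOutputStream out = fs.create(new Path("/tmp/upload.txt"))) {
                IOUtils.copyBytes(in, out, 4096, false);
            }
        }
    }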
