Error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try
Reason:
The write fails because this environment has 3 datanodes and the replication factor is also set to 3, so a write pipeline uses all 3 machines. Under the default replace-datanode-on-failure policy, when the replication factor is 3 or more, the client tries to find another datanode to replace a failed one in the pipeline. Since the cluster has only 3 machines, there is no spare datanode: as soon as one datanode in the pipeline has a problem, the write cannot complete.
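The decision described above can be sketched as follows. This is a paraphrase of the DEFAULT policy condition documented for dfs.client.block.write.replace-datanode-on-failure.policy in hdfs-default.xml (replace only if the replication factor r is at least 3 and either floor(r/2) >= surviving nodes, or r exceeds the surviving nodes and the block is hflushed/appended); the class and method names are illustrative, not Hadoop source:

```java
// Sketch of the DEFAULT replace-datanode-on-failure condition,
// paraphrased from the hdfs-default.xml description (not Hadoop source).
public class ReplacePolicySketch {
    // r: replication factor; n: datanodes still alive in the pipeline;
    // hflushedOrAppended: the block was hflushed or is being appended.
    static boolean shouldReplace(int r, int n, boolean hflushedOrAppended) {
        return r >= 3 && (r / 2 >= n || (r > n && hflushedOrAppended));
    }

    public static void main(String[] args) {
        // 3-node cluster, replication 3, one pipeline node fails -> n = 2.
        // For an hflushed/appended block the client demands a replacement,
        // but no fourth datanode exists, hence the IOException.
        System.out.println(shouldReplace(3, 2, true));   // true
        // If a second node fails (n = 1), a replacement is demanded
        // even for a plain write, since floor(3/2) >= 1.
        System.out.println(shouldReplace(3, 1, false));  // true
    }
}
```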
Workaround:
Modify the hdfs-site.xml file to add or change the following two properties:
<property>
<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
<value>true</value>
</property>
<property>
<name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
<value>NEVER</value>
</property>
dfs.client.block.write.replace-datanode-on-failure.enable controls whether the client applies a replacement policy at all when a write to a datanode in the pipeline fails. The default is true, and it is fine to leave it that way.
dfs.client.block.write.replace-datanode-on-failure.policy defaults to DEFAULT: when the replication factor is 3 or more, the client tries to replace the failed datanode with a new one and continue writing, while with a replication factor of 2 or less it does not replace the datanode and simply keeps writing to the remaining ones. On a cluster with only 3 datanodes there is no spare node to substitute, so a single unresponsive node can block the write entirely; setting the policy to NEVER makes the client keep writing to the surviving datanodes instead of looking for a replacement.
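To see why NEVER avoids the error on a 3-datanode cluster, the policy values can be compared side by side. The value names (NEVER, DEFAULT, ALWAYS) and the DEFAULT condition come from the hdfs-default.xml documentation; the code itself is an illustrative sketch, not Hadoop's implementation:

```java
// Illustrative comparison of replace-datanode-on-failure policy values
// (names and DEFAULT condition per hdfs-default.xml; not Hadoop source).
public class PolicyComparison {
    enum Policy { NEVER, DEFAULT, ALWAYS }

    // Decide whether the client should try to add a replacement datanode
    // after a pipeline failure. r = replication factor, n = surviving nodes.
    static boolean shouldReplace(Policy p, int r, int n, boolean hflushed) {
        switch (p) {
            case NEVER:  return false;  // keep writing with the survivors
            case ALWAYS: return true;   // always demand a new datanode
            default:     return r >= 3 && (r / 2 >= n || (r > n && hflushed));
        }
    }

    public static void main(String[] args) {
        // Replication 3, one of the 3 datanodes fails during an hflushed write.
        // DEFAULT demands a fourth node that does not exist; NEVER carries on.
        System.out.println(shouldReplace(Policy.DEFAULT, 3, 2, true)); // true
        System.out.println(shouldReplace(Policy.NEVER, 3, 2, true));  // false
    }
}
```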
If you are interested in this topic, please follow my subsequent posts; I am "Liu Chao ★ljc".
This article is copyrighted by the author and the Blog Park (Cnblogs). You are welcome to reprint it, but if you do so without the author's consent you must retain this paragraph and place a clearly visible link to the original on the reprinted page; otherwise the author reserves the right to pursue legal liability.