Resolving a Hadoop DataNode Startup Failure

Source: Internet
Author: User

The author runs a distributed Hadoop installation on virtual machines. Because the DataNode and NameNode were sometimes shut down in the wrong order, the DataNode often fails to load on the next startup.

This solution applies to the case where the entire cluster started successfully the first time, but a later startup fails because of an abnormal shutdown. A first-time startup failure can have many causes, such as an error in a configuration file or a misconfigured SSH passwordless login.

This second kind of failure differs from a first-startup failure: attention should focus on the files that Hadoop dynamically loads and generates while running. This article discusses that second case.

In most cases, the namespaceID in the DataNode's VERSION file is inconsistent with the namespaceID in the NameNode's VERSION file. The namespaceID is generated when the command hdfs namenode -format is executed.
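Before wiping anything, you can confirm the mismatch by comparing the two VERSION files directly. The sketch below assumes hypothetical storage paths; substitute the dfs.namenode.name.dir and dfs.datanode.data.dir values from your own hdfs-site.xml.

```shell
#!/bin/sh
# Extract the namespaceID field from a Hadoop VERSION file.
namespace_id() {
    grep '^namespaceID=' "$1" | cut -d'=' -f2
}

# Hypothetical paths: replace with the storage dirs from your hdfs-site.xml.
NN_VERSION=/data/hadoop/dfs/name/current/VERSION
DN_VERSION=/data/hadoop/dfs/data/current/VERSION

if [ -f "$NN_VERSION" ] && [ -f "$DN_VERSION" ]; then
    if [ "$(namespace_id "$NN_VERSION")" = "$(namespace_id "$DN_VERSION")" ]; then
        echo "namespaceIDs match"
    else
        echo "namespaceID mismatch: this DataNode will fail to start"
    fi
fi
```

Run the comparison on each DataNode host; any node whose ID differs from the NameNode's is affected.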

The resolution steps are as follows:

1. First stop the related processes on the NameNode. Switch to Hadoop's sbin directory and run:

sh stop-dfs.sh

sh stop-yarn.sh

2. Switch to the corresponding current directory under Hadoop's configured storage location and clear all the files under current.

3. Once the VERSION and related files under current have been cleared on both the DataNode and the NameNode, go back to the NameNode, execute hdfs namenode -format, then switch to the NameNode's Hadoop sbin directory and run:

sh start-dfs.sh

sh start-yarn.sh

(In newer Hadoop versions, YARN replaces the old MapReduce framework, so the commands differ somewhat.)

You should then see the corresponding nodes load successfully.
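The three steps above can be sketched as a single script. HADOOP_HOME and the storage directories are assumptions here; adjust them to your installation and to the paths configured in hdfs-site.xml, and remember that clearing the generated files must happen on every DataNode host as well.

```shell
#!/bin/sh
# Assumed locations: adjust to match your installation and hdfs-site.xml.
HADOOP_HOME=/opt/hadoop
NN_DIR=/data/hadoop/dfs/name      # dfs.namenode.name.dir
DN_DIR=/data/hadoop/dfs/data      # dfs.datanode.data.dir (on each DataNode)

# Remove the dynamically generated metadata under a storage dir's current/.
clear_current() {
    rm -rf "$1"/current/*
}

if [ -d "$HADOOP_HOME/sbin" ]; then
    cd "$HADOOP_HOME/sbin"

    # 1. Stop HDFS and YARN.
    sh stop-dfs.sh
    sh stop-yarn.sh

    # 2. Clear the generated files (repeat clear_current on every DataNode).
    clear_current "$NN_DIR"
    clear_current "$DN_DIR"

    # 3. Reformat the NameNode (generates a fresh namespaceID), then restart.
    "$HADOOP_HOME/bin/hdfs" namenode -format
    sh start-dfs.sh
    sh start-yarn.sh
fi
```

Note that reformatting deletes all HDFS metadata, so this is only appropriate when the data in HDFS is expendable, as on the author's test VMs.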

The underlying idea is this: when an error occurs, clear away all the files that could interfere, collect your thoughts, and start again. That beats wandering in place.

(The folder we specify in the configuration file contains only the hdfs, tmp, and log directories; the remaining files and folders are generated dynamically when the scripts run. As long as the Hadoop system as a whole can work, they will be regenerated, so even if something is deleted by mistake, a VM snapshot can save the day.)

