The Hadoop environment had been built successfully, and all of the Hadoop components were working properly. After restarting Hadoop a few times, however, the DataNode stopped working: opening the Hadoop web UIs at http://localhost:50030 and http://localhost:50070 showed Live Nodes as 0.
The DataNode startup log shows:
org.apache.hadoop.ipc.Client: Retrying connect to server: uec-fe/16.157.63.10:9000. Already tried 0 time(s).
The NameNode startup log shows:
java.io.IOException: File Xxxxxxxxx/jobtracker.info could only be replicated to 0 nodes, instead of 1
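If it is unclear which daemon is failing, the startup logs can be inspected directly. This is a minimal sketch, assuming a Hadoop 1.x layout where logs live under $HADOOP_HOME/logs and the daemons run as the hadoop user (both the path and the log file names are assumptions; adjust to your installation):

```shell
# Scan the most recent DataNode and NameNode log entries for the two
# errors seen above. The hadoop-hadoop-*-*.log naming is an assumption
# based on the default "hadoop-<user>-<daemon>-<host>.log" pattern.
tail -n 100 "$HADOOP_HOME"/logs/hadoop-hadoop-datanode-*.log | grep -i 'retrying connect\|exception'
tail -n 100 "$HADOOP_HOME"/logs/hadoop-hadoop-namenode-*.log | grep -i 'could only be replicated\|exception'
```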
Checking core-site.xml confirmed that the hadoop.tmp.dir property was configured:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
After searching online and trying many approaches without success, I eventually read an article explaining that this is actually a permissions problem. Fix the ownership of the directory:
sudo chown -R hadoop /home/hadoop/tmp
Then stop the Hadoop services, reformat HDFS (formatting will usually warn that the storage directory already exists and ask whether to reformat; delete that directory first, then format), and finally start the Hadoop services again.
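The recovery steps above can be sketched as a shell sequence. This is a sketch, not a definitive procedure: it assumes Hadoop 1.x helper scripts (stop-all.sh/start-all.sh) on the PATH and hadoop.tmp.dir set to /home/hadoop/tmp as in the core-site.xml shown earlier.

```shell
stop-all.sh                               # stop all Hadoop daemons
sudo chown -R hadoop /home/hadoop/tmp     # fix ownership of hadoop.tmp.dir
rm -rf /home/hadoop/tmp/*                 # clear the old data so formatting does not complain
hadoop namenode -format                   # reformat HDFS (answer Y if prompted)
start-all.sh                              # start the daemons again
```

Note that reformatting destroys all existing HDFS data, so this is only appropriate on a fresh or disposable cluster.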
Summary:
The approach to solving this kind of problem:
1> configure the hadoop.tmp.dir property;
2> fix the permissions: sudo chown -R hadoop /home/hadoop/tmp (the value of hadoop.tmp.dir).
Finally, there may be other reasons why the Hadoop DataNode fails to start; I will keep collecting them.
If reprinting, please credit the source: http://blog.csdn.net/johnny901114/article/details/9624873