Datanodes in a Hadoop cluster fail to start (solution)
The datanode typically fails to start in the following situations:
1. The master's configuration files were modified.
2. `hadoop namenode -format` was run multiple times (a bad habit).
Generally, an error like this appears:
java.io.IOException: Cannot lock storage /usr/hadoop/tmp/dfs/name. The directory is already locked.
Or:
[root@hadoop current]# hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/hadoop1.1/libexec/../logs/hadoop-root-datanode-hadoop.out
[root@hadoop ~]# jps
and jps shows no DataNode process.
In this case, first try running the following commands on the dead node:
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker
If that still does not work, congratulations: your situation is the same as the one described here.
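The two restart commands above can be sketched as a small helper. This is a minimal sketch, not the article's own script: the `build_start_cmd` function is hypothetical, and the default install path is taken from the log line shown earlier (a Hadoop 1.x layout is assumed).

```shell
# build_start_cmd prints the daemon-start command for a given Hadoop 1.x daemon.
# HADOOP_HOME defaults to the install path seen in the log output above (assumed).
build_start_cmd() {
  printf '%s/bin/hadoop-daemon.sh start %s\n' "${HADOOP_HOME:-/usr/local/hadoop1.1}" "$1"
}

# Print the commands to run on the dead node:
build_start_cmd datanode
build_start_cmd jobtracker
```

After running the real commands, `jps` should list a DataNode process if this quick fix was enough.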
The real fix is to go to each of your slaves, change into /usr/hadoop/tmp/dfs/, and run `ls`:
a data folder is listed there.
Delete that data folder. (Deleting it helps because reformatting the namenode gives it a new namespaceID, which no longer matches the one recorded in the datanode's data directory, so the datanode refuses to start.) Then, back in the Hadoop directory, start the cluster:
start-all.sh
Check with jps:
the DataNode process will now appear.
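The per-slave cleanup step can be sketched as a tiny function. This is only an illustration under the article's assumed layout (/usr/hadoop/tmp/dfs); the `wipe_datanode_dir` helper name is hypothetical, and it takes the dfs directory as a parameter so it can be tried safely on a scratch copy first.

```shell
# wipe_datanode_dir removes the stale "data" folder under the given dfs directory,
# so the datanode re-creates it on next start with a namespaceID that matches
# the freshly formatted namenode.
wipe_datanode_dir() {
  dfs_dir=${1:?usage: wipe_datanode_dir /path/to/dfs}
  rm -rf "$dfs_dir/data"
}

# On each slave (path assumed from this article's configuration):
#   wipe_datanode_dir /usr/hadoop/tmp/dfs
# Then, on the master:
#   start-all.sh && jps
```

Note that this destroys the blocks stored on that datanode; it is acceptable here only because the namenode was reformatted anyway.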
Next, check:
http://210.41.166.61:50070 (use your master's IP address)
to see how many live datanodes there are, and:
http://210.41.166.61:50030/
to see the number of nodes the JobTracker reports.
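The two web-UI addresses follow a fixed pattern, which can be sketched as helpers that build them from the master's IP. The function names are hypothetical; the ports (50070 for the HDFS overview, 50030 for the JobTracker) are the Hadoop 1.x defaults used above.

```shell
# dfs_ui_url prints the HDFS web-UI address (live datanode overview).
dfs_ui_url() { printf 'http://%s:50070\n' "$1"; }

# mr_ui_url prints the JobTracker web-UI address (task node count).
mr_ui_url()  { printf 'http://%s:50030/\n' "$1"; }

# Using the master IP from the article:
dfs_ui_url 210.41.166.61
mr_ui_url  210.41.166.61
```

Opening these pages in a browser (or fetching them with `curl`) confirms whether the recovered datanodes have rejoined the cluster.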
OK, problem solved.