Hadoop cannot be started properly (1)
Hadoop failed to start after executing $ bin/start-all.sh.
Exception 1
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:214)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:135)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:119)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:481)
Workaround: this error appears when conf/mapred-site.xml has not been configured. In version 0.21.0 these settings go into mapred-site.xml; in earlier versions they go into core-site.xml. Version 0.20.2 does not use mapred-site.xml, so there the settings can only be placed in core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
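After saving the configuration, format the HDFS filesystem and start the daemons again. The following is only a minimal sketch for a first-time single-node setup, run from the Hadoop installation directory:
$ bin/hadoop namenode -format
$ bin/start-all.sh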
Hadoop cannot be started normally (2)
Exception 2
starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out
Workaround: as above, this happens when conf/mapred-site.xml has not been configured. In version 0.21.0 these settings go into mapred-site.xml; in earlier versions they go into core-site.xml. Version 0.20.2 does not use mapred-site.xml, so there the settings can only be placed in core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Hadoop cannot be started normally (3)
Exception 3
starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: Error: JAVA_HOME is not set.
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Error: JAVA_HOME is not set.
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out
localhost: Error: JAVA_HOME is not set.
Solution:
Configure the JDK environment variables in the conf/hadoop-env.sh file under the Hadoop installation directory:
JAVA_HOME=/home/xixitie/jdk
CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME CLASSPATH
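A quick way to check that the variable is being picked up is to run one of the Hadoop scripts again. This is just a sketch, assuming the commands are run from the Hadoop installation directory:
$ grep JAVA_HOME conf/hadoop-env.sh    # the lines added above should be listed
$ bin/hadoop version                   # should print the version instead of "Error: JAVA_HOME is not set."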
Hadoop cannot be started properly (4)
Exception 4: use hdfs://localhost:9001 in the mapred-site.xml configuration instead of localhost:9001.
The exception information is as follows:
11/04/20 23:33:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
Solution:
Use hdfs://localhost:9000 in the mapred-site.xml configuration instead of localhost:9000:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:9001</value>
</property>
Hadoop cannot be started properly (5)
Exception 5: solution to the "no namenode to stop" problem.
The exception information is as follows:
11/04/20 21:48:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/04/20 21:48:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/04/20 21:48:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/04/20 21:48:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/04/20 21:48:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/04/20 21:48:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/04/20 21:48:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/04/20 21:48:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/04/20 21:48:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
Solution:
This problem occurs because the NameNode has not been started. Why does it say "no namenode to stop"? Some leftover data may be interfering with the NameNode, so you need to execute:
$ bin/hadoop namenode -format
and then
$ bin/start-all.sh
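After restarting, you can verify that the NameNode process is really running, for example with the JDK's jps tool (a sketch; the exact process list will differ):
$ jps    # lists the running Java processes; a NameNode entry should appear
If no NameNode entry appears, check the NameNode log file under the logs directory shown in the startup output.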
Hadoop cannot be started normally (6)
Exception 6: solution to the "no datanode to stop" problem.
Sometimes problems with the on-disk data structures prevent the DataNode from starting.
After formatting with hadoop namenode -format, the files under /tmp are not cleared;
in fact, the files under /tmp/hadoop* need to be deleted.
Procedure:
1. First delete the /tmp directory in HDFS:
hadoop fs -rmr /tmp
2. Stop Hadoop:
stop-all.sh
3. Delete /tmp/hadoop* on the local filesystem:
rm -rf /tmp/hadoop*
4. Reformat the NameNode:
hadoop namenode -format
5. Start Hadoop:
start-all.sh
This resolves the problem of the DataNode failing to start; the five steps can also be run as the small script sketched below.
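The following is only a sketch, assuming a single-node cluster with the Hadoop scripts on the PATH:
#!/bin/sh
# Clear stale HDFS and local data, then reformat and restart Hadoop.
hadoop fs -rmr /tmp          # 1. delete the /tmp directory inside HDFS
stop-all.sh                  # 2. stop all Hadoop daemons
rm -rf /tmp/hadoop*          # 3. delete the local /tmp/hadoop* files
hadoop namenode -format      # 4. reformat the NameNode
start-all.sh                 # 5. start Hadoop again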
++
Because of server maintenance, one machine in the Hadoop cluster had to be taken out of service, so I logged in to that server to stop its DataNode and TaskTracker and ran the following command:
bin/hadoop-daemon.sh stop datanode
Output:
no datanode to stop
However, checking the processes showed that both the DataNode and the TaskTracker were still running, and repeated attempts gave the same result. Finally, I tried stopping them from the NameNode with:
bin/stop-dfs.sh
It still printed:
no datanode to stop
In the end I had to kill the processes by brute force with kill -9.
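For example (a sketch only; the grep patterns and the process IDs are placeholders):
$ ps ax | grep -i datanode       # note the DataNode's process id, e.g. 4711
$ kill -9 4711
$ ps ax | grep -i tasktracker    # likewise for the TaskTracker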
After the Hadoop processes were killed, bin/hadoop-daemon.sh could be used normally again. I wondered whether other Hadoop users had run into the same problem.
Still, that did not explain the root cause. I searched online and found nothing satisfactory, so there was no choice but to read the code myself.
After reading hadoop-daemon.sh I found that the script stops a Hadoop daemon through its PID file. My cluster uses the default configuration, so the PID files are kept under the /tmp directory. Comparing the process ID recorded in the PID file under /tmp with the process ID reported by ps ax showed that the two did not match, which turned out to be the root cause of the problem.
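The stop logic in bin/hadoop-daemon.sh roughly amounts to the following (a simplified sketch, not the actual script):
pid=$HADOOP_PID_DIR/hadoop-$USER-$command.pid
if [ -f "$pid" ] && kill -0 `cat "$pid"` 2>/dev/null; then
  kill `cat "$pid"`            # stop the daemon whose PID is recorded in the file
else
  echo no $command to stop     # printed when the file is missing or its PID no longer matches a live process
fi
So if the PID file under /tmp is missing or stale, the script reports "no datanode to stop" even though the daemon is still running.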
So, update the Hadoop configuration right away:
In hadoop-env.sh, set HADOOP_PID_DIR to a directory under the Hadoop installation path.
Then, based on the PIDs of the running Hadoop processes, create the corresponding PID files under that path:
hadoop-<running username>-datanode.pid
hadoop-<running username>-tasktracker.pid
hadoop-<running username>-namenode.pid
hadoop-<running username>-jobtracker.pid
hadoop-<running username>-secondarynamenode.pid
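For example (a sketch only; the installation path /home/xixitie/hadoop, the pids subdirectory, the user name hadoop and the PID 4711 are assumptions, not values from the original post):
# conf/hadoop-env.sh: keep the PID files under the Hadoop installation instead of /tmp
export HADOOP_PID_DIR=/home/xixitie/hadoop/pids

# Recreate the PID files for the daemons that are already running:
mkdir -p /home/xixitie/hadoop/pids
ps ax | grep -i datanode                  # note the DataNode's process id, e.g. 4711
echo 4711 > /home/xixitie/hadoop/pids/hadoop-hadoop-datanode.pid
# ...repeat for the tasktracker, namenode, jobtracker and secondarynamenode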