When I first set the cluster up, Hadoop itself installed successfully, but the SecondaryNameNode would not start.
After some investigation, the problem turned out to be the configured host-list files.
First, modify the shell scripts.
File path: /home/work/hadoop/bin
The --hosts argument changes from the original masters to secondarynamenode.
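The edit in both scripts is a single line. As a hedged sketch (assuming stock Hadoop 1.x scripts, where the host-list file passed to --hosts is named masters), it could be applied with sed; the .demo file below is illustrative only, so the real scripts are not touched:

```shell
# Demonstrate the one-line edit on a copy of the relevant line.
# (That the stock script uses "--hosts masters" is an assumption.)
printf '%s\n' '"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode' > start-dfs.sh.demo

# Swap the host-list file from masters to secondarynamenode.
sed -i 's/--hosts masters/--hosts secondarynamenode/' start-dfs.sh.demo

cat start-dfs.sh.demo
```

The same sed expression (with stop instead of start in the daemon line) covers stop-dfs.sh.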
[work@master bin]$ cat start-dfs.sh
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start hadoop dfs daemons.
# Optionally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback]"

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

# get arguments
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)
      ;;
    (-rollback)
      dataStartOpt=$nameStartOpt
      ;;
    (*)
      echo $usage
      exit 1
      ;;
  esac
fi

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode
[work@master bin]$
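For background on why a file name works as the --hosts value: hadoop-daemons.sh hands --hosts &lt;file&gt; down to its helper slaves.sh, which reads $HADOOP_CONF_DIR/&lt;file&gt; and runs the daemon command on each host listed, over ssh. A minimal sketch of that selection logic (the demo-conf directory and the echo stand-in for ssh are illustrative, not the real script):

```shell
# Illustrative sketch of host-list selection; not the real slaves.sh.
HADOOP_CONF_DIR=./demo-conf
mkdir -p "$HADOOP_CONF_DIR"
echo node1 > "$HADOOP_CONF_DIR/secondarynamenode"

# --hosts secondarynamenode resolves to this file:
HOSTLIST="$HADOOP_CONF_DIR/secondarynamenode"

# Loop over the listed hosts and run the daemon command on each;
# echo stands in for the real ssh call.
while read -r host; do
  echo "ssh $host hadoop-daemon.sh start secondarynamenode"
done < "$HOSTLIST"
```

With node1 as the only entry in the file, the loop runs exactly once, for node1.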
The stop script needs the same modification:
the --hosts argument changes from the original masters to secondarynamenode.
[work@master bin]$ cat stop-dfs.sh
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Stop hadoop DFS daemons.  Run this on master node.

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode
[work@master bin]$
Second, modify the configuration files.
File path: /home/work/hadoop/conf
The slaves file originally included node1 as well; now it lists only node2 and node3.
[work@master conf]$ cat slaves
node2
node3
[work@master conf]$
In addition, add a new file named secondarynamenode, containing node1 by itself.
[work@master conf]$ cat secondarynamenode
node1
[work@master conf]$
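A quick sanity check on these two files (a hypothetical helper, not part of Hadoop): the host lists should stay disjoint, since node1 is meant to run only the SecondaryNameNode here. The demo-conf directory stands in for /home/work/hadoop/conf:

```shell
# Hypothetical check that no host appears in both host-list files.
CONF=./demo-conf
mkdir -p "$CONF"
printf 'node2\nnode3\n' > "$CONF/slaves"
echo node1 > "$CONF/secondarynamenode"

# -x matches whole lines, -F takes the patterns literally,
# -f reads the patterns from the secondarynamenode file.
if grep -qxFf "$CONF/secondarynamenode" "$CONF/slaves"; then
  echo "overlap: some host is in both slaves and secondarynamenode"
else
  echo "ok: host lists are disjoint"
fi
```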
At this point the SecondaryNameNode configuration is complete.
Here is the resulting state of a successful run.
In this cluster, master is the NameNode, node1 is the SecondaryNameNode, and node2 and node3 are DataNodes.
On master:
[work@master conf]$ jps
13338 NameNode
13884 Jps
13554 JobTracker
[work@master conf]$
On node1:
[work@node1 ~]$ jps
9772 SecondaryNameNode
10071 Jps
[work@node1 ~]$
On node2:
[work@node2 ~]$ jps
22897 TaskTracker
22767 DataNode
23234 Jps
[work@node2 ~]$
On node3:
[work@node3 ~]$ jps
3457 TaskTracker
3327 DataNode
3806 Jps
[work@node3 ~]$
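These per-host checks can be scripted rather than eyeballed. The sketch below greps captured jps output for the daemons a given role should show; the sample text is the master output from above, and in a live check you would capture it with jps_out=$(jps):

```shell
# Verify that the expected daemons for a role appear in jps output.
# Sample output is the master listing shown above.
jps_out='13338 NameNode
13884 Jps
13554 JobTracker'

for daemon in NameNode JobTracker; do
  if printf '%s\n' "$jps_out" | grep -qw "$daemon"; then
    echo "$daemon running"
  else
    echo "$daemon MISSING"
  fi
done
```

For node1 the expected list would be SecondaryNameNode, and for node2/node3 it would be DataNode and TaskTracker.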