Automating Hadoop Decommission: the shell-script version

Tags: hadoop decommission shell automation
Introduction
In an earlier post I covered automating Hadoop Decommission with an Ansible playbook; this post implements the same workflow with shell scripts.
All of the scripts live on the jump host, and the remote servers execute them over ssh, so nothing needs to be copied out to the remote machines.
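The whole setup rests on one ssh idiom: "bash -s" makes the remote bash read the script from standard input, so the script itself never leaves the jump host, and anything placed after it becomes the script's positional parameters. A minimal sketch of the pattern (the host and script names here are illustrative, not from the post):

# run a local script on a remote host without copying it over;
# arg1/arg2 arrive in the remote script as $1 and $2
ssh hadoop@some-host "bash -s" < ./local-script.sh arg1 arg2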
The scripts
Main script: decom.sh
#!/bin/bash
iplist=/home/hadoop/shell_scripts/iplist
# The ssh targets were obfuscated in the source; master1/master2 are
# hypothetical placeholders for the two master nodes, and the remote
# user "hadoop" is likewise an assumption -- substitute your own.
master1=hadoop@master1
master2=hadoop@master2

# 1. process iplist, appending each host to both exclude files (calls append.sh)
for exclude_host in `cat $iplist`; do
    ssh $master1 "bash -s" < append.sh "$exclude_host" hdfs-exclude
    ssh $master1 "bash -s" < append.sh "$exclude_host" mapred-exclude
    ssh $master2 "bash -s" < append.sh "$exclude_host" hdfs-exclude
    ssh $master2 "bash -s" < append.sh "$exclude_host" mapred-exclude
done

# 2. refresh the node lists on both masters (calls refreshnodes.sh)
ssh $master1 "bash -s" < refreshnodes.sh
ssh $master2 "bash -s" < refreshnodes.sh

# 3. stop the nodemanager and datanode services on each decommissioned node,
#    and maybe the regionserver service too (calls stopservice.sh)
for client in `cat $iplist`; do
    ssh hadoop@"${client}" "bash -s" < stopservice.sh
done
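One caveat: step 3 stops the datanodes as soon as the refresh calls return, but HDFS may still be busy replicating blocks off the excluded hosts. A hedged sketch of a wait loop that could sit between steps 2 and 3 (it reuses the master1 placeholder and install path from above):

# poll the NameNode until no datanode reports "Decommission in progress"
while ssh $master1 "/opt/hadoop-2.6.0/bin/hdfs dfsadmin -report 2>/dev/null" \
        | grep -q "Decommission in progress"; do
    echo "decommission still in progress, waiting..."
    sleep 60
done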
Sub-script: append.sh
#!/bin/bash
conf_dir=/opt/hadoop-2.6.0/etc/hadoop/
backup_dir=/opt/hadoop-2.6.0/etc/hadoop/BAK/
exclude_host=$1
exclude_file=$2

function usage() {
    echo -e "usage: $0 exclude_host exclude_file\nexclude_file must be mapred-exclude or hdfs-exclude"
}

if [ $# -ne 2 ]; then
    usage
    exit 1
elif [ "$exclude_file" != "mapred-exclude" -a "$exclude_file" != "hdfs-exclude" ]; then
    usage
    exit 1
fi

#if [ -d /apache/hadoop/conf ]; then
#    cd /apache/hadoop/conf
#else
#    echo "dir /apache/hadoop/conf does not exist, please check!"
#    exit 3
#fi

[ ! -d ${backup_dir} ] && mkdir ${backup_dir}

# back up the exclude file before touching it
cp "${conf_dir}${exclude_file}" "${backup_dir}${exclude_file}"-`date +%F.%H.%M.%S`

# append the host to the exclude file, skipping duplicates
grep ${exclude_host} "${conf_dir}${exclude_file}" >/dev/null 2>&1
retval=$?
if [ $retval -ne 0 ]; then
    echo ${exclude_host} >> "${conf_dir}${exclude_file}"
else
    echo "duplicated host: ${exclude_host}"
fi
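append.sh only edits the exclude files; it assumes the masters are already configured to read them, which the post does not show. On a stock Hadoop 2.6 install that wiring would look roughly like the snippet below (the property names are standard Hadoop; the file paths merely mirror the author's layout):

<!-- hdfs-site.xml: hosts listed here are decommissioned by the NameNode -->
<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop-2.6.0/etc/hadoop/hdfs-exclude</value>
</property>

<!-- yarn-site.xml: hosts listed here are removed by the ResourceManager -->
<property>
    <name>yarn.resourcemanager.nodes.exclude-path</name>
    <value>/opt/hadoop-2.6.0/etc/hadoop/mapred-exclude</value>
</property>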
Sub-script: refreshnodes.sh
#!/bin/bash
hadoop_bin_dir=/opt/hadoop-2.6.0/bin/

${hadoop_bin_dir}yarn rmadmin -refreshNodes 2>/dev/null
if [ $? -ne 0 ]; then
    echo "command yarn rmadmin -refreshNodes Failed on $(hostname)!!!"
    exit 2
fi

# wait a moment so mapreduce can shift jobs onto other nodes
sleep 2

${hadoop_bin_dir}hadoop dfsadmin -refreshNodes 2>/dev/null
if [ $? -ne 0 ]; then
    echo "command hadoop dfsadmin -refreshNodes Failed on $(hostname)!!!"
    exit 3
fi
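If the refresh went through, the ResourceManager should list the excluded hosts as DECOMMISSIONED. A quick hedged check, run on a master and assuming the same install path (-all lists nodes in every state, not just running ones):

# excluded hosts should appear with state DECOMMISSIONED
/opt/hadoop-2.6.0/bin/yarn node -list -all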
Sub-script: stopservice.sh
#!/bin/bash
hadoop_bin_dir=/opt/hadoop-2.6.0/sbin/
#svc -d /service/nodemanager
#svc -d /service/datanode
${hadoop_bin_dir}yarn-daemon.sh stop nodemanager
${hadoop_bin_dir}hadoop-daemon.sh stop datanode
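The step 3 comment in decom.sh hints that a regionserver may need stopping as well. If the nodes also carry HBase regionservers, a sketch of the extra lines for stopservice.sh could look like this (the HBase path is a guess; hbase-daemon.sh itself is the standard HBase control script):

# also stop the HBase regionserver, if this node runs one
hbase_bin_dir=/opt/hbase/bin/    # hypothetical install path
${hbase_bin_dir}hbase-daemon.sh stop regionserver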
File: iplist
10.9.214.160
10.9.214.149
To run:
bash decom.sh