Cluster-wide remote copy (run from the Namenode)
------------------------------------------------
#!/bin/bash
# Copy a file or directory to every datanode (s200 through s500).
if [ $# -lt 1 ]; then
  echo no args
  exit
fi
# get first argument
arg1=$1
cuser=`whoami`
fname=`basename $arg1`
dir=`dirname $arg1`
if [ "$dir" = "." ]; then
  dir=`pwd`
fi
for (( i=200; i<=500; i=i+100 )); do
  echo --------copying $arg1 to s$i--------
  if [ -d $arg1 ]; then
    scp -r $arg1 $cuser@s$i:$dir
  else
    scp $arg1 $cuser@s$i:$dir
  fi
  echo
done
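The script's relative-path handling is easy to miss in the loop above: when `dirname` returns ".", the argument is resolved against the current working directory so the remote copy lands in the matching absolute path. A minimal local sketch of just that step (`resolve_dir` is a hypothetical helper name used only here):

```shell
#!/bin/bash
# Hypothetical helper mirroring the script's path resolution:
# a bare filename has dirname ".", which is replaced by pwd.
resolve_dir() {
  local dir
  dir=`dirname "$1"`
  if [ "$dir" = "." ]; then
    dir=`pwd`
  fi
  echo "$dir"
}

cd /tmp
resolve_dir notes.txt   # prints /tmp
resolve_dir /etc/hosts  # prints /etc
```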
Cluster-wide remote directory listing (run from the Namenode)
-------------------------------------------------------------
#!/bin/bash
# List a file or directory on every datanode (s200 through s500).
if [ $# -lt 1 ]; then
  echo no args
  exit
fi
# get first argument
arg1=$1
cuser=`whoami`
fname=`basename $arg1`
dir=`dirname $arg1`
if [ "$dir" = "." ]; then
  dir=`pwd`
fi
for (( i=200; i<=500; i=i+100 )); do
  echo --------ls $arg1 on s$i--------
  if [ -d $arg1 ]; then
    ssh s$i ls $dir/$fname | xargs
  else
    ssh s$i ls $dir | xargs
  fi
  echo
done
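All of these scripts derive their target hosts from the same counting loop. The fragment below expands that loop on its own to show exactly which hostnames it generates, assuming the s200..s500 naming from the scripts above:

```shell
#!/bin/bash
# Expand the host loop used by the cluster scripts into a flat hostname list.
hosts=""
for (( i=200; i<=500; i=i+100 )); do
  hosts="$hosts s$i"
done
echo "${hosts# }"   # prints: s200 s300 s400 s500
```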
Cluster-wide remote delete of files or directories (run from the Namenode)
---------------------------------------------------------------------------
#!/bin/bash
# Delete a file or directory on every datanode (s200 through s500).
if [ $# -lt 1 ]; then
  echo no args
  exit
fi
# get first argument
arg1=$1
cuser=`whoami`
fname=`basename $arg1`
dir=`dirname $arg1`
if [ "$dir" = "." ]; then
  dir=`pwd`
fi
for (( i=200; i<=500; i=i+100 )); do
  echo --------rm $arg1 on s$i--------
  if [ -d $arg1 ]; then
    ssh s$i rm -rf $dir/$fname
    echo ok
  else
    ssh s$i rm $dir/$fname
    echo ok
  fi
  echo
done
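The delete script switches between `rm` and `rm -rf` with a `[ -d ... ]` test, since a directory cannot be removed without the recursive flag. A minimal local sketch of that decision (`pick_rm` is a hypothetical helper name, not part of the script above):

```shell
#!/bin/bash
# Hypothetical helper mirroring the delete script's choice:
# directories need rm -rf, plain files only need rm.
pick_rm() {
  if [ -d "$1" ]; then
    echo "rm -rf"
  else
    echo "rm"
  fi
}

mkdir -p /tmp/xrm_demo_dir
touch /tmp/xrm_demo_file
pick_rm /tmp/xrm_demo_dir    # prints: rm -rf
pick_rm /tmp/xrm_demo_file   # prints: rm
```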
Cluster one-click start: format HDFS on the Namenode and create the user directory (run from the Namenode)
-----------------------------------------------------------------------------------------------------------
#!/bin/bash
echo "--------------- NOW FORMAT HDFS ---------------"
hdfs namenode -format
echo "--------------- HDFS FORMAT DONE ---------------"
echo "--------------- NOW START HDFS ---------------"
start-dfs.sh
echo "--------------- HDFS STARTED ---------------"
echo "--------------- NOW START YARN SYSTEM ---------------"
start-yarn.sh
echo "--------------- YARN SYSTEM STARTED ---------------"
echo "--------------- NOW CREATE USER DIRECTORY ---------------"
hadoop fs -mkdir -p /user/yehom/data
echo "--------------- USER DIRECTORY CREATED ---------------"
echo "--------------- SHOW USER DIRECTORY LIST ---------------"
hadoop fs -ls -R /
echo "*************** ALL STARTED AND INITIALIZED ***************"
echo "*************** design by yehom @YehomLab.com ***************"
Cluster one-click shutdown: delete all logs and related directories (run from the Namenode)
--------------------------------------------------------------------------------------------
#!/bin/bash
echo "------------------- NOW STOP HADOOP CLUSTER -------------------"
stop-yarn.sh
stop-dfs.sh
echo "------------------- HADOOP CLUSTER STOPPED -------------------"
echo "------------------- NOW DELETE DATA FILES -------------------"
xrm.sh ~/hadoop-yehom
echo "------------------- DATA FILES DELETED -------------------"
echo "------------------- NOW DELETE LOGS -------------------"
xrm.sh /soft/hadoop/logs
echo "------------------- LOGS DELETED -------------------"
echo "******************* ALL HADOOP TASKS STOPPED AND CLEANED *******************"
echo "******************* DESIGN by Yehom @YehomLab.com *******************"
This article is from the "YehomLab" blog; please keep this source: http://yehom.blog.51cto.com/5159116/1793049
"DAY2" a shell script that needs to be used in full Hadoop distribution mode