Part 1: Hadoop bin scripts
The following Hadoop bin scripts are covered, based on the actual needs of the project:
Hadoop shell scripts:
hadoop-config.sh: assigns values to several variables:
  HADOOP_HOME (the Hadoop installation directory)
  HADOOP_CONF_DIR (the Hadoop configuration file directory)
  HADOOP_SLAVES (the path of the file that lists the slave hosts)
hadoop-daemon.sh: starts a daemon on a single node
hadoop-daemons.sh: runs the same script, hadoop-daemon.sh, on all slaves via slaves.sh
start-all.sh: starts HDFS and MapReduce
start-balancer.sh: starts the Hadoop load balancer
start-dfs.sh: starts HDFS
start-jobhistoryserver.sh: starts the job history server
start-mapred.sh: starts MapReduce
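The variable assignments in hadoop-config.sh follow the usual shell defaulting pattern: a variable keeps any value already set in the environment, otherwise it falls back to a default derived from HADOOP_HOME. A minimal sketch of that pattern, where the /opt/hadoop path is an assumed example rather than a fixed Hadoop default:

```shell
#!/bin/sh
# Sketch of the defaulting pattern hadoop-config.sh uses for its variables.
# The paths here are illustrative assumptions, not required locations.
unset HADOOP_HOME HADOOP_CONF_DIR HADOOP_SLAVES  # start clean for the demo

HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop}"                  # install directory
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"    # config directory
HADOOP_SLAVES="${HADOOP_SLAVES:-$HADOOP_CONF_DIR/slaves}"  # slave host list

echo "$HADOOP_CONF_DIR"   # -> /opt/hadoop/conf
echo "$HADOOP_SLAVES"     # -> /opt/hadoop/conf/slaves
```

Because of the `${VAR:-default}` form, exporting HADOOP_CONF_DIR before running the scripts overrides the derived default, which is how a custom configuration directory is usually pointed at.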
stop-all.sh: stops HDFS and MapReduce
stop-balancer.sh: stops the load balancer
stop-dfs.sh: stops HDFS
stop-jobhistoryserver.sh: stops the job history server
stop-mapred.sh: stops MapReduce
task-controller: a native binary used by the TaskTracker to launch tasks
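Taken together, a typical cluster bring-up uses these scripts with HDFS before MapReduce, and shutdown reverses the order. A dry-run sketch of that sequence; the /opt/hadoop path is an assumption, and RUN defaults to echo so the commands are only printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch of a typical start/stop ordering for the scripts above.
# RUN defaults to echo so nothing is actually launched; set RUN="" on a
# real cluster. The /opt/hadoop install path is an assumed example.
RUN="${RUN:-echo}"
BIN="/opt/hadoop/bin"

# Start: HDFS first, then MapReduce, then optional services.
$RUN "$BIN/start-dfs.sh"        # namenode, secondarynamenode, datanodes
$RUN "$BIN/start-mapred.sh"     # jobtracker, tasktrackers
$RUN "$BIN/start-balancer.sh"   # optional: rebalance DFS blocks

# Stop: reverse order.
$RUN "$BIN/stop-balancer.sh"
$RUN "$BIN/stop-mapred.sh"
$RUN "$BIN/stop-dfs.sh"
```

start-all.sh and stop-all.sh wrap the DFS and MapReduce steps into a single command, so the pairwise scripts are mainly useful when the two layers need to be cycled independently.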
Part 2: Basic Hadoop shell operations
The Hadoop shell commands include:
- namenode -format: format the DFS filesystem
- secondarynamenode: run the DFS secondary namenode
- namenode: run the DFS namenode
- datanode: run a DFS datanode
- dfsadmin: run a DFS admin client
- mradmin: run a Map-Reduce admin client
- fsck: run a DFS filesystem checking utility
- fs: run a generic filesystem user client
- balancer: run a cluster balancing utility
- fetchdt: fetch a delegation token from the NameNode
- jobtracker: run the MapReduce JobTracker node
- pipes: run a Pipes job
- tasktracker: run a MapReduce TaskTracker node
- historyserver: run job history servers as a standalone daemon
- job: manipulate MapReduce jobs
- queue: get information regarding JobQueues
- version: print the version
- jar <jar>: run a jar file
- distcp <srcurl> <desturl>: copy files or directories recursively
- archive -archiveName NAME -p <parent path> <src>* <dest>: create a Hadoop archive
- classpath: prints the class path needed to get the Hadoop jar and the required libraries
- daemonlog: get/set the log level for each daemon
- or CLASSNAME: run the class named CLASSNAME
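All of the subcommands above are invoked through the single hadoop launcher. A dry-run sketch of a few common invocations; the directory names, file names, and jar/class names are made-up examples, and RUN defaults to echo so nothing is actually run against a cluster:

```shell
#!/bin/sh
# Dry-run sketch of common `hadoop` subcommand invocations from the list
# above. RUN defaults to echo; paths and jar names are hypothetical.
RUN="${RUN:-echo}"

$RUN hadoop namenode -format              # one-time: format the DFS filesystem
$RUN hadoop fs -mkdir /user/demo          # generic filesystem client
$RUN hadoop fs -put data.txt /user/demo/  # upload a local file to DFS
$RUN hadoop fsck /                        # check DFS filesystem health
$RUN hadoop dfsadmin -report              # admin client: cluster status report
$RUN hadoop jar wordcount.jar WordCount /user/demo /out  # run a jar file
$RUN hadoop job -list                     # list running MapReduce jobs
$RUN hadoop version                       # print the version
```

Dropping the leading `$RUN` (or setting RUN="" in the environment) turns each line into the real command as it would be typed against a running cluster.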
Article transferred from: http://www.aboutyun.com/thread-6742-1-1.html