Hadoop entry: Summary of hadoop shell commands


Part 1: Scripts in the hadoop bin directory
The following bin scripts cover what most projects actually need:
hadoop: the hadoop shell itself
hadoop-config.sh: assigns values to a few variables (a usage sketch follows this list):
    HADOOP_HOME (the hadoop installation directory)
    HADOOP_CONF_DIR (the hadoop configuration file directory)
    HADOOP_SLAVES (the path of the host-list file given by --hosts)
hadoop-daemon.sh: starts a single daemon on the local node
hadoop-daemons.sh: runs hadoop-daemon.sh on all slaves via slaves.sh
start-all.sh: starts both HDFS and mapreduce
start-balancer.sh: starts hadoop load balancing
start-dfs.sh: starts HDFS
start-jobhistoryserver.sh: starts the job history server
start-mapred.sh: starts mapreduce
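A minimal sketch of how these variables are typically set and how the daemon scripts consume them (the paths and file names are illustrative, not from the original article):

    # Export up front (e.g. in ~/.bashrc or conf/hadoop-env.sh); otherwise
    # hadoop-config.sh derives HADOOP_HOME from the script's own location.
    export HADOOP_HOME=/opt/hadoop-1.2.1        # assumed install path
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf    # configuration directory

    # hadoop-config.sh also honors per-invocation overrides:
    #   --config <dir>   sets HADOOP_CONF_DIR
    #   --hosts <file>   sets HADOOP_SLAVES to $HADOOP_CONF_DIR/<file>
    hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
    hadoop-daemons.sh --hosts slaves start datanode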
stop-all.sh: stops both HDFS and mapreduce
stop-balancer.sh: stops load balancing
stop-dfs.sh: stops HDFS
stop-jobhistoryserver.sh: stops the job history server
stop-mapred.sh: stops mapreduce
task-controller: native helper binary used to launch tasks in secure clusters
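Taken together, a typical full-cluster lifecycle with these scripts looks roughly like this (a sketch; it assumes the scripts are on PATH and passwordless SSH to the slaves is configured):

    start-dfs.sh        # bring up namenode, datanodes, secondarynamenode
    start-mapred.sh     # bring up jobtracker and tasktrackers
    start-balancer.sh   # optional: rebalance blocks across datanodes

    stop-mapred.sh      # stop the mapreduce layer first
    stop-dfs.sh         # then stop HDFS

start-all.sh and stop-all.sh simply wrap the same DFS and mapred script pairs in a single call.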
Part 2: Basic hadoop shell operations

The hadoop command
Its subcommands include:

  1. namenode -format: format the DFS filesystem
  2. secondarynamenode: run the DFS secondary namenode
  3. namenode: run the DFS namenode
  4. datanode: run a DFS datanode
  5. dfsadmin: run a DFS admin client
  6. mradmin: run a map-reduce admin client
  7. fsck: run a DFS filesystem checking utility
  8. fs: run a generic filesystem user client
  9. balancer: run a cluster balancing utility
  10. fetchdt: fetch a delegation token from the namenode
  11. jobtracker: run the mapreduce job tracker node
  12. pipes: run a pipes job
  13. tasktracker: run a mapreduce task tracker node
  14. historyserver: run the job history server as a standalone daemon
  15. job: manipulate mapreduce jobs
  16. queue: get information about job queues
  17. version: print the version
  18. jar <jar>: run a jar file
  19. distcp <srcurl> <desturl>: copy a file or directories recursively
  20. archive -archiveName NAME -p <parent path> <src>* <dest>: create a hadoop archive
  21. classpath: print the class path needed to get the hadoop jar and the required libraries
  22. daemonlog: get/set the log level for each daemon
  23. CLASSNAME: run the class named CLASSNAME
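A few usage examples for the generic filesystem client (fs); all paths are illustrative, not from the original article:

    hadoop fs -mkdir /user/hadoop/input               # create an HDFS directory
    hadoop fs -put localfile.txt /user/hadoop/input   # upload a local file
    hadoop fs -ls /user/hadoop/input                  # list directory contents
    hadoop fs -cat /user/hadoop/input/localfile.txt   # print a file to stdout
    hadoop fs -get /user/hadoop/input/localfile.txt copy.txt   # download a file
    hadoop fs -rmr /user/hadoop/input                 # recursive delete (1.x syntax)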
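Likewise, a sketch of the admin and job utilities from the same list (the job id, jar name, and cluster addresses are placeholders):

    hadoop version                           # print the version
    hadoop fsck / -files -blocks             # check filesystem health
    hadoop dfsadmin -report                  # datanode and capacity report
    hadoop dfsadmin -safemode get            # query namenode safe mode
    hadoop job -list                         # list running mapreduce jobs
    hadoop job -kill job_201801010000_0001   # kill a job by id (placeholder id)
    hadoop jar hadoop-examples.jar wordcount /in /out       # jar name varies by release
    hadoop distcp hdfs://nn1:8020/src hdfs://nn2:8020/dst   # recursive inter-cluster copy
    hadoop archive -archiveName data.har -p /user/hadoop dir1 /user/hadoop/har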


Article reposted from: http://www.aboutyun.com/thread-6742-1-1.html

 
