Hadoop basic operation commands


Start Hadoop

start-all.sh

Stop Hadoop

stop-all.sh

View the file list

List the files in the /user/admin/aaron directory of HDFS:

hadoop fs -ls /user/admin/aaron

List all files in the /user/admin/aaron directory of HDFS, including files in subdirectories:

hadoop fs -lsr /user/admin/aaron

Create a directory

hadoop fs -mkdir /user/admin/aaron/newDir

Delete a file

Delete the file needDelete under the /user/admin/aaron directory in HDFS:

hadoop fs -rm /user/admin/aaron/needDelete

Delete the /user/admin/aaron directory in HDFS along with all files in it:

hadoop fs -rmr /user/admin/aaron

Upload files

hadoop fs -put /home/admin/newFile /user/admin/aaron/

Download files

hadoop fs -get /user/admin/aaron/newFile /home/admin/newFile

View a file

hadoop fs -cat /user/admin/aaron/newFile

Create an empty file

hadoop fs -touchz /user/new.txt

Rename a file on hadoop

hadoop fs -mv /user/test.txt /user/OK.txt

Merge all files under a given HDFS directory into a single file and download it to a local path:

hadoop dfs -getmerge /user /home/t
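
For example, assuming a job wrote its part files to /user/admin/aaron/output (an illustrative path reusing the directories above), the following merges them into one local file:

hadoop dfs -getmerge /user/admin/aaron/output /home/admin/merged.txt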

Submit a MapReduce job

bin/hadoop jar /home/admin/hadoop/job.jar [jobMainClass] [jobArgs]
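
As a concrete sketch, assuming the examples jar bundled with the Hadoop distribution (the jar name and paths here are illustrative), the classic wordcount job could be submitted like this:

bin/hadoop jar hadoop-examples.jar wordcount /user/admin/input /user/admin/output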

Kill a running job

hadoop job -kill job_20100531_37_0053
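
If you do not know the job ID, you can first list the currently running jobs and their IDs (the ID above is only an example of the format):

hadoop job -list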

More Hadoop commands

hadoop

Running hadoop without arguments prints descriptions of the other available commands:

namenode -format     format the DFS filesystem
secondarynamenode    run the DFS secondary namenode
namenode             run the DFS namenode
datanode             run a DFS datanode
dfsadmin             run a DFS admin client
fsck                 run a DFS filesystem checking utility
fs                   run a generic filesystem user client
balancer             run a cluster balancing utility
jobtracker           run the MapReduce job Tracker node
pipes                run a Pipes job
tasktracker          run a MapReduce task Tracker node
job                  manipulate MapReduce jobs
queue                get information regarding JobQueues
version              print the version
jar <jar>            run a jar file
distcp <srcurl> <desturl>   copy file or directories recursively
archive -archiveName NAME <src>* <dest>   create a hadoop archive
daemonlog            get/set the log level for each daemon
or
CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

Note:

1. List all the commands supported by the Hadoop shell:

$ bin/hadoop fs -help

2. Display detailed information about a specific command:

$ bin/hadoop fs -help command-name

3. View a summary of the history logs in the specified path, showing job details and details of failed and killed tasks:

$ bin/hadoop job -history output-dir

4. For more details about a job, such as successful tasks and the attempts made for each task, run:

$ bin/hadoop job -history all output-dir

5. Format a new distributed file system:

$ bin/hadoop namenode -format

6. On the designated NameNode, run the following command to start HDFS; this also starts the DataNode daemon on all the listed slaves:

$ bin/start-dfs.sh

7. On the designated JobTracker, run the following command to start Map/Reduce:

$ bin/start-mapred.sh

8. On the designated NameNode, run the following command to stop HDFS:

$ bin/stop-dfs.sh

9. On the designated JobTracker, run the following command to stop Map/Reduce:

$ bin/stop-mapred.sh

 


DFSShell

10. Create a directory named /foodir:

$ bin/hadoop dfs -mkdir /foodir

11. Remove a directory named /foodir:

$ bin/hadoop dfs -rmr /foodir

12. View the contents of the file /foodir/myfile.txt:

$ bin/hadoop dfs -cat /foodir/myfile.txt

 


DFSAdmin

13. Put the cluster in safe mode:

$ bin/hadoop dfsadmin -safemode enter

14. Display the list of DataNodes:

$ bin/hadoop dfsadmin -report

15. Decommission the DataNode datanodename:

$ bin/hadoop dfsadmin -decommission datanodename

16. The bin/hadoop dfsadmin -help command lists all currently supported commands. For example:

* -report: reports basic HDFS statistics. Some of this information is also available on the NameNode web front page.

* -safemode: though usually not required, an administrator can manually enter or leave safe mode.

* -finalizeUpgrade: removes the cluster backup created during the last upgrade.

17. Explicitly place HDFS in safe mode:

$ bin/hadoop dfsadmin -safemode enter
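
The -safemode option also accepts get, leave, and wait, so you can check the current state or exit safe mode the same way:

$ bin/hadoop dfsadmin -safemode get
$ bin/hadoop dfsadmin -safemode leave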

18. Before an upgrade, the administrator needs to remove the existing backup with the finalize-upgrade command:

$ bin/hadoop dfsadmin -finalizeUpgrade

19. To find out whether a cluster still needs to be finalized after an upgrade:

$ bin/hadoop dfsadmin -upgradeProgress status

20. Run the new version with the -upgrade option:

$ bin/start-dfs.sh -upgrade

21. To return to the old version, stop the cluster, deploy the old version of Hadoop, and then start the cluster with the rollback option:

$ bin/start-dfs.sh -rollback

22. The following commands and options support quotas; the first two are administrator commands. A short worked example follows the list.

* dfsadmin -setQuota <N> <directory>...<directory>

Set the quota of each directory to N. The command is applied to each directory in turn; an error is reported if N is not a positive long integer, if the directory does not exist or is a file, or if the directory would immediately exceed the new quota.

* dfsadmin -clrQuota <directory>...<directory>

Remove the quota from each directory. The command is applied to each directory in turn; an error is reported if the directory does not exist or is a file. It is not an error if the directory has no quota set.

* fs -count -q <directory>...<directory>

With the -q option, report the quota set for each directory and the remaining quota. If a directory has no quota set, none and inf are reported.
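
As a worked sketch (the directory and the quota value of 10 are illustrative), set a quota on a directory, inspect it, and then clear it:

$ bin/hadoop dfsadmin -setQuota 10 /user/admin/aaron
$ bin/hadoop fs -count -q /user/admin/aaron
$ bin/hadoop dfsadmin -clrQuota /user/admin/aaron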

23. Create a Hadoop archive:

$ hadoop archive -archiveName NAME <src>* <dest>

-archiveName NAME: the name of the archive to create.

src: file system path names, which work as usual with regular expressions.

dest: the target directory in which to save the archive.
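
For example, the following (with illustrative names and paths) archives two directories into foo.har under /user/admin/archives:

$ hadoop archive -archiveName foo.har /user/admin/dir1 /user/admin/dir2 /user/admin/archives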

24. Recursively copy files or directories:

$ hadoop distcp <srcurl> <desturl>

srcurl: the source URL.

desturl: the destination URL.
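
For example, to copy a directory between two clusters (the hostnames, port, and paths below are illustrative):

$ hadoop distcp hdfs://nn1.example.com:8020/user/admin/src hdfs://nn2.example.com:8020/user/admin/dest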

 


25. Run the HDFS file system checking tool (fsck):

Usage: hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]
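
For example, to check the files under a directory and print their blocks and block locations (the path is illustrative):

$ bin/hadoop fsck /user/admin/aaron -files -blocks -locations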
