Hadoop Shell commands

Use bin/hadoop fs <args> to invoke the File System (FS) shell. All FS shell commands take URI paths as arguments.
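
For example, the same directory can be listed either with a full URI or with an absolute path on the default file system (the host and port below are illustrative, not from the original page):

  hadoop fs -ls hdfs://namenode:9000/user/hadoop
  hadoop fs -ls /user/hadoop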

1. cat

Description: outputs the content of the files at the specified paths to stdout.

Usage: hadoop fs -cat URI [URI …]

Example:

  hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
  hadoop fs -cat file:///file3 /user/hadoop/file4

Return Value: returns 0 on success and -1 on failure.

2. chgrp

Note: changes the group to which a file belongs. Use -R to apply the change recursively through the directory structure. The user running the command must be the file's owner or a superuser.

Usage: hadoop fs -chgrp [-R] GROUP URI [URI …]

Example:

  hadoop fs -chgrp -R hadoop /user/hadoop/

3. chmod

Note: changes file permissions. Use -R to apply the change recursively through the directory structure. The user running the command must be the file's owner or a superuser.

Usage: hadoop fs -chmod [-R] <MODE> URI [URI …]

Example:

  hadoop fs -chmod -R 744 /user/hadoop/

4. chown

Note: changes the owner of a file. Use -R to apply the change recursively through the directory structure. The user running the command must be a superuser.

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI …]

Example:

  hadoop fs -chown -R hadoop /user/hadoop/

5. copyFromLocal (local to HDFS)

Note: similar to the put command, except that the source must be a local file.

Usage: hadoop fs -copyFromLocal <localsrc> URI
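
An illustrative invocation (the file names are placeholders, not from the original page):

  hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile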

6. copyToLocal (HDFS to local)

Note: similar to the get command, except that the target must be a local file.

Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
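
An illustrative invocation (the file names are placeholders, not from the original page):

  hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile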

7. cp

Note: copies files from the source path(s) to the target path. Multiple source paths are allowed, in which case the target must be a directory.

Usage: hadoop fs -cp URI [URI …] <dest>

Example:

  hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
  hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir

Return Value: returns 0 on success and -1 on failure.

8. du

Note: displays the size of each file in a directory, or the size of a single file when only a file is specified.

Usage: hadoop fs -du URI [URI …]

Example:

  hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1

To view the size of all HBase files:

  hadoop fs -du hdfs://master:54310/hbase

Return Value: returns 0 on success and -1 on failure.

9. dus

Description: displays a summary of file sizes, similar to du -s in Unix.

Usage: hadoop fs -dus <args>
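
An illustrative invocation (the path is a placeholder, not from the original page):

  hadoop fs -dus /user/hadoop/dir1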

10. expunge

Note: empties the trash (recycle bin).

Usage: hadoop fs -expunge

11. get (hdfs to local)

Note: copies files to the local file system. Files that fail the CRC check can be copied with the -ignorecrc option. Use the -crc option to copy a file along with its CRC information.

Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>

Example:

  hadoop fs -get /user/hadoop/file localfile
  hadoop fs -get hdfs://host:port/user/hadoop/file localfile

Return Value: returns 0 on success and -1 on failure.

12. getmerge

Note: takes a source directory and a destination file as input, and concatenates all files in the source directory into the destination local file. addnl is optional and specifies that a newline be appended at the end of each file.

Usage: hadoop fs -getmerge <src> <localdst> [addnl]
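
An illustrative invocation that merges every file under a directory into one local file, appending a newline after each (the names are placeholders, not from the original page):

  hadoop fs -getmerge /user/hadoop/dir1 mergedfile addnl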

13. ls

Usage: hadoop fs -ls <args>

Note:

(1) For a file, information is returned in the following format:

filename <number of replicas> filesize modification_date modification_time permissions userid groupid

(2) For a directory, a list of its direct children is returned, as in Unix. A directory is listed as:

dirname <dir> modification_date modification_time permissions userid groupid

Example:

  hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile

Return Value: returns 0 on success and -1 on failure.

14. lsr

Usage: hadoop fs -lsr <args>

Description: recursive version of the ls command, similar to ls -R in Unix.
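
An illustrative invocation (the path is a placeholder, not from the original page):

  hadoop fs -lsr /user/hadoop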

15. mkdir

Note: takes the path URIs as arguments and creates the directories. The behavior is similar to Unix mkdir -p: parent directories at all levels of the path are created as needed.

Usage: hadoop fs -mkdir <paths>

Example:

  hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
  hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir

Return Value: returns 0 on success and -1 on failure.

16. moveFromLocal

Description: outputs a "not implemented" message.

Usage: hadoop fs -moveFromLocal <src> <dst>

17. mv

Note: moves files from the source path to the target path. Multiple source paths are allowed, in which case the target must be a directory. Files cannot be moved between different file systems.

Usage: hadoop fs -mv URI [URI …] <dest>

Example:

  hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
  hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1

Return Value: returns 0 on success and -1 on failure.

18. put

Note: copies one or more source paths from the local file system to the target file system. Input can also be read from standard input and written to the target file system.

Usage: hadoop fs -put <localsrc> ... <dst>

Example:

  hadoop fs -put localfile /user/hadoop/hadoopfile
  hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
  hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
  hadoop fs -put - hdfs://host:port/hadoop/hadoopfile

The last example reads the input from standard input.

Return Value: returns 0 on success and -1 on failure.

19. rm

Description: deletes the specified files. Only files and empty directories can be deleted; refer to the rmr command for recursive deletion.

Usage: hadoop fs -rm URI [URI …]

Example:

  hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir

Return Value: returns 0 on success and -1 on failure.

20. rmr

Description: recursive version of the rm command.

Usage: hadoop fs -rmr URI [URI …]

Example:

  hadoop fs -rmr /user/hadoop/dir
  hadoop fs -rmr hdfs://host:port/user/hadoop/dir

Return Value: returns 0 on success and -1 on failure.

21. setrep

Note: changes the replication factor of a file. The -R option recursively changes the replication factor of all files in a directory.

Usage: hadoop fs -setrep [-R] [-w] <rep> <path>

Example:

  hadoop fs -setrep -w 3 -R /user/hadoop/dir1

The -w flag makes the command wait for the replication to complete.

Return Value: returns 0 on success and -1 on failure.

22. stat

Description: returns statistics for the specified path.

Usage: hadoop fs -stat URI [URI …]

Example:

  hadoop fs -stat path

Return Value: returns 0 on success and -1 on failure.

23. tail

Note: outputs the last 1 KB of the file to stdout. The -f option is supported, and its behavior is consistent with Unix tail -f.

Usage: hadoop fs -tail [-f] URI

Example:

  hadoop fs -tail pathname

Return Value: returns 0 on success and -1 on failure.

24. test

Usage: hadoop fs -test -[ezd] URI

Options:

-e: checks whether the file exists; returns 0 if it does.

-z: checks whether the file is zero bytes long; returns 0 if it is.

-d: returns 1 if the path is a directory, and 0 otherwise.

Example:

  hadoop fs -test -e filename
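
The other flags follow the same pattern (the names are placeholders, not from the original page):

  hadoop fs -test -z filename
  hadoop fs -test -d pathname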

25. text

Note: The source file is output in text format. The allowed formats are zip and TextRecordInputStream.

Usage: hadoop fs -text <src>
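
An illustrative invocation on a zip file, one of the formats named above (the file name is a placeholder, not from the original page):

  hadoop fs -text /user/hadoop/archive.zip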

26. touchz

Description: creates a zero-byte empty file.

Usage: hadoop fs -touchz URI [URI …]

Example:

  hadoop fs -touchz pathname

Return Value: returns 0 on success and -1 on failure.
