Hadoop Shell commands
Call the File System (FS) shell with bin/hadoop fs <args>. All FS shell commands take URI paths as arguments.
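For example, to list the root of the default file system (a minimal illustration, assuming a configured HDFS cluster):
- hadoop fs -ls /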
1. cat
Description: outputs the content of the file at the specified path to stdout.
Usage: hadoop fs -cat URI [URI…]
Example:
- hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
- hadoop fs -cat file:///file3/user/hadoop/file4
Return Value: 0 is returned for success, and -1 is returned for failure.
2. chgrp
Note: Changes the group to which a file belongs. Use -R to apply the change recursively through the directory structure. The user running the command must be the owner of the file or a superuser.
Usage: hadoop fs -chgrp [-R] GROUP URI [URI…]
Example:
- hadoop fs -chgrp -R hadoop /user/hadoop/
3. chmod
Note: Changes the permissions of a file. Use -R to apply the change recursively through the directory structure. The user running the command must be the owner of the file or a superuser.
Usage: hadoop fs -chmod [-R] <MODE[,MODE]… | OCTALMODE> URI [URI…]
Example:
- hadoop fs -chmod -R 744 /user/hadoop/
4. chown
Note: Changes the owner of a file. Use -R to apply the change recursively through the directory structure. The user running the command must be a superuser.
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI…]
Example:
- hadoop fs -chown -R hadoop /user/hadoop/
5. copyFromLocal (Local to hdfs)
Note: Similar to the put command, except that the source path must be a local file.
Usage: hadoop fs -copyFromLocal <localsrc> URI
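Example (hypothetical local and HDFS paths, for illustration):
- hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile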
6. copyToLocal (hdfs to local)
Note: Similar to the get command, except that the destination path must be a local file.
Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
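Example (hypothetical paths, for illustration):
- hadoop fs -copyToLocal /user/hadoop/file localfile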
7. cp
Note: Copies files from the source path to the destination path. This command allows multiple source paths, in which case the destination path must be a directory.
Usage: hadoop fs -cp URI [URI…] <dest>
Example:
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return Value: 0 is returned for success, and -1 is returned for failure.
8. du
Note: Displays the sizes of all files in a directory, or the size of a single file when only one file is specified.
Usage: hadoop fs -du URI [URI…]
Example:
- hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
To view the sizes of all HBase files:
- hadoop fs -du hdfs://master:54310/hbase
Return Value: 0 is returned for success, and -1 is returned for failure.
9. dus
Description: displays a summary of file sizes.
Usage: hadoop fs -dus <args>
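Example (hypothetical path, for illustration):
- hadoop fs -dus /user/hadoop/dir1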
10. expunge
Note: Clear the recycle bin.
Usage: hadoop fs -expunge
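Example (the command takes no arguments):
- hadoop fs -expunge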
11. get (hdfs to local)
Note: Copies files to the local file system. Use the -ignorecrc option to copy files that fail CRC verification. Use the -crc option to copy files along with their CRC information.
Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Example:
- hadoop fs -get /user/hadoop/file localfile
- hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return Value: 0 is returned for success, and -1 is returned for failure.
12. getmerge
Note: Takes a source directory and a destination file as input, and concatenates all files in the source directory into the local destination file. addnl is optional and specifies appending a newline at the end of each file.
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
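Example (hypothetical paths, for illustration; the second form appends a newline after each file):
- hadoop fs -getmerge /user/hadoop/dir1 localfile
- hadoop fs -getmerge /user/hadoop/dir1 localfile addnl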
13. ls
Usage: hadoop fs -ls <args>
Note:
(1) For a file, the file information is returned in the following format:
filename <number of replicas> file size modification date modification time permissions user ID group ID
(2) For a directory, a list of its immediate children is returned, as in Unix. A directory is listed as follows:
directory name <dir> modification date modification time permissions user ID group ID
Example:
- hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return Value: 0 is returned for success, and -1 is returned for failure.
14. lsr
Usage: hadoop fs -lsr <args>
Description: recursive version of the ls command. Similar to ls -R in Unix.
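Example (hypothetical path, for illustration):
- hadoop fs -lsr /user/hadoop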
15. mkdir
Note: Takes the URIs specified as paths and creates those directories. The behavior is similar to Unix mkdir -p: it creates parent directories at all levels along the path.
Usage: hadoop fs -mkdir <paths>
Example:
- hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
- hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
Return Value: 0 is returned for success, and -1 is returned for failure.
16. moveFromLocal
Description: outputs a "not implemented" message.
Usage: hadoop fs -moveFromLocal <src> <dst>
17. mv
Note: Moves files from the source path to the destination path. This command allows multiple source paths, in which case the destination path must be a directory. Files cannot be moved across different file systems.
Usage: hadoop fs -mv URI [URI…] <dest>
Example:
- hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
Return Value: 0 is returned for success, and -1 is returned for failure.
18. put
Note: Copies one or more source paths from the local file system to the destination file system. It can also read input from standard input and write it to the destination file system.
Usage: hadoop fs -put <localsrc>... <dst>
Example:
- hadoop fs -put localfile /user/hadoop/hadoopfile
- hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
- hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
- hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
Reads the input from standard input.
Return Value: 0 is returned for success, and -1 is returned for failure.
19. rm
Description: deletes the specified files. Only files and empty directories are deleted; refer to the rmr command for recursive deletion.
Usage: hadoop fs -rm URI [URI…]
Example:
- hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return Value: 0 is returned for success, and -1 is returned for failure.
20. rmr
Description: recursive version of delete.
Usage: hadoop fs -rmr URI [URI…]
Example:
- hadoop fs -rmr /user/hadoop/dir
- hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return Value: 0 is returned for success, and -1 is returned for failure.
21. setrep
Note: Changes the replication factor of a file. The -R option recursively changes the replication factor of all files in a directory.
Usage: hadoop fs -setrep [-R] [-w] <rep> <path>
Example:
- hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return Value: 0 is returned for success, and -1 is returned for failure.
22. stat
Description: returns statistics about the specified path.
Usage: hadoop fs -stat URI [URI…]
Example:
- hadoop fs -stat path
Return Value: 0 is returned for success, and -1 is returned for failure.
23. tail
Description: outputs the last kilobyte of the file to stdout. The -f option is supported, and the behavior is consistent with Unix.
Usage: hadoop fs -tail [-f] URI
Example:
- hadoop fs -tail pathname
Return Value: 0 is returned for success, and -1 is returned for failure.
24. test
Usage: hadoop fs -test -[ezd] URI
Options:
-e: checks whether the file exists; returns 0 if it does.
-z: checks whether the file is zero bytes; returns 0 if it is.
-d: returns 1 if the path is a directory, otherwise returns 0.
Example:
- hadoop fs -test -e filename
25. text
Note: The source file is output in text format. The allowed formats are zip and TextRecordInputStream.
Usage: hadoop fs -text <src>
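Example (hypothetical path, for illustration):
- hadoop fs -text /user/hadoop/file.zip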
26. touchz
Note: Creates a zero-byte empty file.
Usage: hadoop fs -touchz URI [URI…]
Example:
- hadoop fs -touchz pathname
Return Value: 0 is returned for success, and -1 is returned for failure.