Original address: http://hadoop.apache.org/docs/r1.0.4/cn/hdfs_shell.html
- FS Shell
- cat
- chgrp
- chmod
- chown
- copyFromLocal
- copyToLocal
- cp
- du
- dus
- expunge
- get
- getmerge
- ls
- lsr
- mkdir
- moveFromLocal
- mv
- put
- rm
- rmr
- setrep
- stat
- tail
- test
- text
- touchz
FS Shell
File system (FS) shell commands are invoked as bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be written as hdfs://namenode:namenodeport/parent/child, or simply as /parent/child (assuming your configuration sets the default to namenode:namenodeport). Most FS shell commands behave like their corresponding Unix shell commands; differences are noted in the per-command descriptions below. Error messages are sent to stderr; other output goes to stdout.
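For example, assuming the configured default file system is hdfs://namenode:namenodeport, the two invocations below are equivalent (the path is illustrative):
- hadoop fs -ls hdfs://namenode:namenodeport/parent/child
- hadoop fs -ls /parent/child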
cat
How to use: hadoop fs -cat URI [URI ...]
Outputs the contents of the specified files to stdout.
Example:
- hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
- hadoop fs -cat file:///file3 /user/hadoop/file4
Return value:
Returns 0 on success and -1 on failure.
chgrp
How to use: hadoop fs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the files, or a superuser. For more information, see the HDFS Permissions User Guide.
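Example (the group name and path are illustrative):
- hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1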
chmod
How to use: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the files, or a superuser. For more information, see the HDFS Permissions User Guide.
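Example (the octal mode and path are illustrative):
- hadoop fs -chmod -R 755 /user/hadoop/dir1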
chown
How to use: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files. With -R, the change is made recursively through the directory structure. The user must be a superuser. For more information, see the HDFS Permissions User Guide.
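Example (the owner, group, and path are illustrative):
- hadoop fs -chown -R hadoopuser:hadoopgroup /user/hadoop/dir1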
copyFromLocal
How to use: hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source must be a local file.
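Example (paths are illustrative):
- hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile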
copyToLocal
How to use: hadoop fs -copyToLocal [-ignoreCrc] [-crc] URI <localdst>
Similar to the get command, except that the destination must be a local file.
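Example (paths are illustrative):
- hadoop fs -copyToLocal /user/hadoop/file localfile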
cp
How to use: hadoop fs -cp URI [URI ...] <dest>
Copies files from the source path to the destination path. Multiple source paths are allowed, in which case the destination must be a directory.
Example:
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return value:
Returns 0 on success and -1 on failure.
du
How to use: hadoop fs -du URI [URI ...]
Displays the sizes of all files in a directory, or the size of a single file when only a file is specified.
Example:
- hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
Return value:
Returns 0 on success and -1 on failure.
dus
How to use: hadoop fs -dus <args>
Displays a summary of file sizes.
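Example (the path is illustrative):
- hadoop fs -dus /user/hadoop/dir1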
expunge
How to use: hadoop fs -expunge
Empties the trash. See the HDFS Design documentation for more information on the trash feature.
get
How to use: hadoop fs -get [-ignoreCrc] [-crc] <src> <localdst>
Copies files to the local file system. The -ignoreCrc option copies files that fail the CRC check. The -crc option copies files along with their CRC information.
Example:
- hadoop fs -get /user/hadoop/file localfile
- hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return value:
Returns 0 on success and -1 on failure.
getmerge
How to use: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates the files in the source directory into the local destination file. addnl is optional and specifies that a newline be added at the end of each file.
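Example (paths are illustrative):
- hadoop fs -getmerge /user/hadoop/dir1 localfile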
ls
How to use: hadoop fs -ls <args>
If the argument is a file, information is returned in the following format:
filename <number of replicas> size modification_date modification_time permissions userid groupid
If the argument is a directory, a list of its direct children is returned, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
- hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return value:
Returns 0 on success and -1 on failure.
lsr
How to use: hadoop fs -lsr <args>
The recursive version of the ls command. Similar to Unix ls -R.
mkdir
How to use: hadoop fs -mkdir <paths>
Takes the URIs given as arguments and creates those directories. The behavior is similar to Unix mkdir -p: parent directories along the path are created as needed.
Example:
- hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
- hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
Return value:
Returns 0 on success and -1 on failure.
moveFromLocal
How to use: dfs -moveFromLocal <src> <dst>
Displays a "not implemented" message.
mv
How to use: hadoop fs -mv URI [URI ...] <dest>
Moves files from the source path to the destination path. Multiple source paths are allowed, in which case the destination must be a directory. Moving files across file systems is not permitted.
Example:
- hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
- hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
Return value:
Returns 0 on success and -1 on failure.
put
How to use: hadoop fs -put <localsrc> ... <dst>
Copies one or more source paths from the local file system to the destination file system. Reading input from stdin and writing it to the destination file system is also supported.
- hadoop fs -put localfile /user/hadoop/hadoopfile
- hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
- hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
- hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
Reads the input from stdin.
Return value:
Returns 0 on success and -1 on failure.
rm
How to use: hadoop fs -rm URI [URI ...]
Deletes the specified files. Only non-empty directories and files are deleted. Refer to the rmr command for recursive deletes.
Example:
- hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return value:
Returns 0 on success and -1 on failure.
rmr
How to use: hadoop fs -rmr URI [URI ...]
The recursive version of delete.
Example:
- hadoop fs -rmr /user/hadoop/dir
- hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return value:
Returns 0 on success and -1 on failure.
setrep
How to use: hadoop fs -setrep [-R] <path>
Changes the replication factor of a file. The -R option recursively changes the replication factor of all files in a directory.
Example:
- hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return value:
Returns 0 on success and -1 on failure.
stat
How to use: hadoop fs -stat URI [URI ...]
Returns statistics for the specified path.
Example:
- hadoop fs -stat /user/hadoop/dir1
Return value:
Returns 0 on success and -1 on failure.
tail
How to use: hadoop fs -tail [-f] URI
Outputs the last kilobyte of the file to stdout. The -f option is supported and behaves as in Unix.
Example:
- hadoop fs -tail /user/hadoop/file1
Return value:
Returns 0 on success and -1 on failure.
test
How to use: hadoop fs -test -[ezd] URI
Options:
-e checks whether the file exists; returns 0 if it does.
-z checks whether the file is zero bytes long; returns 0 if it is.
-d returns 1 if the path is a directory, otherwise returns 0.
Example:
- hadoop fs -test -e filename
text
How to use: hadoop fs -text <src>
Outputs a source file in text format. The allowed formats are zip and TextRecordInputStream.
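Example (the path is illustrative):
- hadoop fs -text /user/hadoop/file1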
touchz
How to use: hadoop fs -touchz URI [URI ...]
Creates a zero-byte empty file.
Example:
- hadoop fs -touchz /user/hadoop/emptyfile
Return value:
Returns 0 on success and -1 on failure.