The File System (FS) shell is invoked by bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenode:namenodeport/parent/child, or more simply as /parent/child (assuming your configuration is set to point to namenode:namenodeport). Most FS shell commands behave like the corresponding Unix shell commands; differences are noted in the descriptions of the individual commands below. Error information is sent to stderr and other output is sent to stdout.
cat
Usage: hadoop fs -cat URI [URI ...]
Copies the contents of the specified source paths to stdout.
Example:
hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
hadoop fs -cat file:///file3 /user/hadoop/file4
Return value:
Returns 0 on success and -1 on error.
chgrp
Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the file or a super-user. For more information, see the HDFS Permissions User Guide.
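For example, to recursively assign a directory tree to a group (the group name and path here are illustrative):
hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1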
chmod
Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the file or a super-user. For more information, see the HDFS Permissions User Guide.
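For example, to recursively make a directory tree readable and traversable by everyone (the octal mode and path are illustrative):
hadoop fs -chmod -R 755 /user/hadoop/dir1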
chown
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files. With -R, the change is made recursively through the directory structure. The user must be a super-user. For more information, see the HDFS Permissions User Guide.
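For example, to recursively transfer ownership of a directory tree (the owner, group, and path are illustrative):
hadoop fs -chown -R hadoop:hadoopgroup /user/hadoop/dir1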
copyFromLocal
Usage: hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source path is restricted to a local file.
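For example (the local and HDFS paths are illustrative):
hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile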
copyToLocal
Usage: hadoop fs -copyToLocal [-ignoreCrc] [-crc] URI <localdst>
Similar to the get command, except that the destination path is restricted to a local file.
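For example (the HDFS and local paths are illustrative):
hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile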
cp
Usage: hadoop fs -cp URI [URI ...] <dest>
Copies files from a source path to a destination path. This command allows multiple source paths, in which case the destination must be a directory.
Example:
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return value:
Returns 0 on success and -1 on error.
du
Usage: hadoop fs -du URI [URI ...]
Displays the sizes of all files in a directory, or the size of a single file when only a file is specified.
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
Return value:
Returns 0 on success and -1 on error.
dus
Usage: hadoop fs -dus <args>
Displays a summary of file sizes.
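For example (the path is illustrative):
hadoop fs -dus /user/hadoop/dir1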
expunge
Usage: hadoop fs -expunge
Empties the Trash. Refer to the HDFS Design document for more information on the Trash feature.
get
Usage: hadoop fs -get [-ignoreCrc] [-crc] <src> <localdst>
Copies files to the local file system. Files that fail the CRC check can be copied with the -ignoreCrc option. Use the -crc option to copy files along with their CRC information.
Example:
hadoop fs -get /user/hadoop/file localfile
hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return value:
Returns 0 on success and -1 on error.
getmerge
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates all files in the source directory into the local destination file. addnl is optional and specifies that a newline character be added at the end of each file.
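For example, to concatenate all files under an HDFS directory into a single local file, appending a newline after each one (the paths are illustrative):
hadoop fs -getmerge /user/hadoop/dir1 localmergedfile addnl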
ls
Usage: hadoop fs -ls <args>
For a file, returns file information in the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
For a directory, returns a list of its direct children, as in Unix. Directory listing entries have the following format:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return value:
Returns 0 on success and -1 on error.
lsr
Usage: hadoop fs -lsr <args>
Recursive version of the ls command. Similar to Unix ls -R.
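For example (the path is illustrative):
hadoop fs -lsr /user/hadoop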
mkdir
Usage: hadoop fs -mkdir <paths>
Takes path URIs as arguments and creates the directories. The behavior is similar to Unix mkdir -p: parent directories along the path are created as needed.
Example:
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
Return value:
Returns 0 on success and -1 on error.
moveFromLocal
Usage: dfs -moveFromLocal <src> <dst>
Displays a "not implemented" message.
mv
Usage: hadoop fs -mv URI [URI ...] <dest>
Moves files from a source path to a destination path. This command allows multiple source paths, in which case the destination must be a directory. Moving files across file systems is not permitted.
Example:
hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
Return value:
Returns 0 on success and -1 on error.
put
Usage: hadoop fs -put <localsrc> ... <dst>
Copies single or multiple source paths from the local file system to the destination file system. Also reads input from stdin and writes it to the destination file system.
Example:
hadoop fs -put localfile /user/hadoop/hadoopfile
hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
Reads the input from stdin.
Return value:
Returns 0 on success and -1 on error.
rm
Usage: hadoop fs -rm URI [URI ...]
Deletes the files specified as arguments. Only deletes non-empty directories and files. Refer to the rmr command for recursive deletes.
Example:
hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return value:
Returns 0 on success and -1 on error.
rmr
Usage: hadoop fs -rmr URI [URI ...]
Recursive version of delete.
Example:
hadoop fs -rmr /user/hadoop/dir
hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return value:
Returns 0 on success and -1 on error.
setrep
Usage: hadoop fs -setrep [-R] <path>
Changes the replication factor of a file. The -R option recursively changes the replication factor of all files within a directory.
Example:
hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return value:
Returns 0 on success and -1 on error.
stat
Usage: hadoop fs -stat URI [URI ...]
Returns the stat information for the specified path.
Example:
hadoop fs -stat path
Return value:
Returns 0 on success and -1 on error.
tail
Usage: hadoop fs -tail [-f] URI
Displays the last kilobyte of the file to stdout. The -f option is supported and behaves as in Unix.
Example:
hadoop fs -tail pathname
Return value:
Returns 0 on success and -1 on error.
test
Usage: hadoop fs -test -[ezd] URI
Options:
-e checks whether the file exists. Returns 0 if it does.
-z checks whether the file is zero length. Returns 0 if it is.
-d returns 1 if the path is a directory, 0 otherwise.
Example:
hadoop fs -test -e filename
text
Usage: hadoop fs -text <src>
Takes a source file and outputs it in text format. The allowed formats are zip and TextRecordInputStream.
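For example, to display the contents of a compressed file as text (the path is illustrative):
hadoop fs -text /user/hadoop/file.zip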
touchz
Usage: hadoop fs -touchz URI [URI ...]
Creates a zero-length empty file.
Example:
hadoop fs -touchz pathname
Return value:
Returns 0 on success and -1 on error.