1. hadoop fs -fs [local | <file system URI>]: Declares the file system Hadoop should use. If not declared, it is taken from the current configuration, searched in the following order: hadoop-default.xml inside the hadoop jar -> hadoop-default.xml under $HADOOP_CONF_DIR -> hadoop-site.xml under $HADOOP_CONF_DIR. Passing local uses the local file system as Hadoop's DFS; passing a URI makes that specific file system the DFS.
2. hadoop fs -ls <path>: Like ls on a local system: lists the contents of the specified directory and supports pattern matching. The output format is filename (full path) <r n> size, where n is the replication factor and size is in bytes. (A combined example session for the listing and space-usage commands appears after this list.)
3. hadoop fs -lsr <path>: Recursively lists file information matching the pattern; like -ls, but descends into all subdirectories.
4. hadoop fs -du <path>: Shows the space used (in bytes) by files matching the pattern; equivalent to Unix du -sb <path>/* for directories and du -b <path> for files. Output format: name (full path) size (in bytes).
5. hadoop fs -dus <path>: Like -du with the same output format, but equivalent to Unix du -sb, i.e. one summary total per argument.
6. hadoop fs -mv <src> <dst>: Moves files matching the pattern to the target location. When src names multiple files, dst must be a directory.
7. hadoop fs -cp <src> <dst>: Copies files to the target location; dst must be a directory when src names multiple files.
8. hadoop fs -rm [-skipTrash] <src>: Deletes files matching the pattern; equivalent to Unix rm <src>.
9. hadoop fs -rmr [-skipTrash] <src>: Recursively deletes all matching files and directories; equivalent to Unix rm -rf <src>.
10. hadoop fs -rmi [-skipTrash] <src>: Equivalent to Unix rm -rfi <src>.
11. hadoop fs -put <localsrc> ... <dst>: Copies files from the local system into DFS. (See the data-movement example session after this list.)
12. hadoop fs -copyFromLocal <localsrc> ... <dst>: Equivalent to -put.
13. hadoop fs -moveFromLocal <localsrc> ... <dst>: Equivalent to -put, except that the source files are deleted after the copy.
14. hadoop fs -get [-ignoreCrc] [-crc] <src> <localdst>: Copies files matching the pattern from DFS to the local file system; if multiple files match, localdst must be a directory.
15. hadoop fs -getmerge <src> <localdst>: As the name implies, copies multiple files from DFS and merges them, in sorted order, into a single file on the local file system.
16. hadoop fs -cat <src>: Displays file contents.
17. hadoop fs -copyToLocal [-ignoreCrc] [-crc] <src> <localdst>: Equivalent to -get.
18. hadoop fs -mkdir <path>: Creates a directory at the specified location.
19. hadoop fs -setrep [-R] [-w] <rep> <path/file>: Sets the replication factor of a file; the -R flag applies the setting recursively to subdirectories and files, and -w waits for the replication to complete. (See the permissions and replication example after this list.)
20. hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH ...: Modifies file permissions; the -R flag applies the change recursively. MODE is symbolic, such as a+r, g-w, or +rwx; OCTALMODE is numeric, such as 755.
21. hadoop fs -chown [-R] [OWNER][:[GROUP]] PATH ...: Changes the owner and group of files; -R applies the change recursively.
22. hadoop fs -chgrp [-R] GROUP PATH ...: Equivalent to -chown ... :GROUP ....
23. hadoop fs -count [-q] <path>: Counts files and the space they occupy; the output columns are DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME, or, if -q is added, QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA are listed as well.
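For the listing and space-usage commands above (-ls, -lsr, -du, -dus, -count), here is a minimal example session. The path /user/hadoop/logs is hypothetical, chosen purely for illustration; it assumes a running HDFS cluster where that directory already holds a few files.

    hadoop fs -ls /user/hadoop/logs      # list the directory; each line is: name <r n> size
    hadoop fs -lsr /user/hadoop          # recursive listing of all subdirectories
    hadoop fs -du /user/hadoop/logs      # size in bytes of each entry under the path
    hadoop fs -dus /user/hadoop/logs     # one summary total for the whole path
    hadoop fs -count -q /user/hadoop     # file/dir counts plus the quota columns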
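The data-movement commands (-put, -get, -getmerge, -cat, -cp, -mv, -rm, -rmr) combine naturally into a workflow. The sketch below assumes a hypothetical local file access.log and invented HDFS paths under /user/hadoop; note that a cp target directory must already exist, hence the extra mkdir.

    hadoop fs -mkdir /user/hadoop/logs                   # create the target directory
    hadoop fs -put access.log /user/hadoop/logs/         # upload from the local system
    hadoop fs -cat /user/hadoop/logs/access.log          # print the file contents
    hadoop fs -mkdir /user/hadoop/backup
    hadoop fs -cp /user/hadoop/logs/access.log /user/hadoop/backup/   # copy within DFS
    hadoop fs -getmerge /user/hadoop/logs merged.log     # merge the directory into one local file
    hadoop fs -get /user/hadoop/logs/access.log copy.log # download a single file
    hadoop fs -rm /user/hadoop/backup/access.log         # delete one file (goes to trash if enabled)
    hadoop fs -rmr -skipTrash /user/hadoop/backup        # recursive delete, bypassing the trash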
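Finally, a sketch of the permission and replication commands (-chmod, -chown, -chgrp, -setrep). The user hadoop and group analysts are hypothetical names, and changing ownership normally requires superuser privileges on the cluster.

    hadoop fs -chmod -R 755 /user/hadoop/logs                # rwxr-xr-x, applied recursively
    hadoop fs -chown -R hadoop:analysts /user/hadoop/logs    # set owner and group recursively
    hadoop fs -chgrp analysts /user/hadoop/logs              # change the group only
    hadoop fs -setrep -R 3 /user/hadoop/logs                 # set replication factor 3 for the tree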
Reprinted from: http://www.blogjava.net/changedi/archive/2013/08/12/402696.html