Hadoop Shell Command Dictionary (worth bookmarking)


This article answers the following questions:

1. What is the difference between chmod and chown?
2. Where does cat send the content of the files specified in the path?
3. Can cp copy between different file systems?
4. How can I view file sizes in HDFS?
5. How does HDFS merge files?
6. How do I display all folders and files in the current path?
7. Why might rm fail to delete a directory?
8. How do I view a file's creation time?
9. Which commands display file contents? Can you name three?
10. How can I determine whether a file exists?
11. How do I create a 0-byte file?

It is hard to memorize all of these commands at once, and easy to forget them later; keep this list at hand and look commands up as you use them.


The File System (FS) shell is invoked with bin/hadoop fs <args>. All FS shell commands take path URIs as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenode:namenodeport/parent/child, or more simply as /parent/child (assuming that the default in your configuration file is namenode:namenodeport). Most FS shell commands behave like their corresponding Unix shell commands; the differences are noted in the descriptions below. Error information is sent to stderr, and other output is sent to stdout. (stderr and stdout can be understood as files here.)
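For instance, assuming a cluster whose default file system is configured as hdfs://namenode:namenodeport (host and port are placeholders), the following two invocations refer to the same file:
  • hadoop fs -cat hdfs://namenode:namenodeport/parent/child
  • hadoop fs -cat /parent/child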


cat

Usage: hadoop fs -cat URI [URI ...]
Outputs the content of the files specified in the path to stdout.
Example:
  • hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
  • hadoop fs -cat file:///file3 /user/hadoop/file4
Return value:
0 is returned on success, and -1 is returned on failure.


chgrp

Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files. With -R, the change is made recursively through the directory structure. The user of the command must be the owner of the files or a superuser. For more information, see the HDFS Permissions Guide.
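For example, a hypothetical invocation that recursively assigns the placeholder group hadoopgroup to a directory tree:
  • hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1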


chmod

Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user of the command must be the owner of the files or a superuser. For more information, see the HDFS Permissions Guide.
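For example, a hypothetical invocation that recursively gives the owner full access and everyone else read and execute access:
  • hadoop fs -chmod -R 755 /user/hadoop/dir1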


chown

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files. With -R, the change is made recursively through the directory structure. The user of the command must be a superuser. For more information, see the HDFS Permissions Guide.
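For example, a hypothetical invocation that recursively sets both a new owner and a new group (both names are placeholders):
  • hadoop fs -chown -R hduser:hadoopgroup /user/hadoop/dir1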


copyFromLocal

Usage: hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source path must be a local file.
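For example (localfile and the HDFS path are placeholders):
  • hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile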


copyToLocal

Usage: hadoop fs -copyToLocal [-ignoreCrc] [-crc] URI <localdst>
Similar to the get command, except that the destination path must be a local file.
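For example (the HDFS path and localfile are placeholders):
  • hadoop fs -copyToLocal /user/hadoop/file localfile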


cp

Usage: hadoop fs -cp URI [URI ...] <dest>
Copies files from the source path to the destination path. This command allows multiple source paths, in which case the destination path must be a directory.
Example:
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return value: 0 is returned on success, and -1 is returned on failure.


du

Usage: hadoop fs -du URI [URI ...]
Displays the sizes of all files in a directory, or the size of a single file when only a file is specified.
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
Return value:
0 is returned on success, and -1 is returned on failure.


dus

Usage: hadoop fs -dus <args>
Displays a summary of file sizes, i.e. the aggregate size of each path given.
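For example, a hypothetical invocation that prints the total size of everything under a directory:
  • hadoop fs -dus /user/hadoop/dir1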


expunge

Usage: hadoop fs -expunge
Empties the trash. For more information about the trash feature, see the HDFS design documentation.


get

Usage: hadoop fs -get [-ignoreCrc] [-crc] <src> <localdst>
Copies files to the local file system. Files that fail the CRC check can be copied with the -ignoreCrc option. Use the -crc option to copy the files along with their CRC information.
Example:
  • hadoop fs -get /user/hadoop/file localfile
  • hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return value: 0 is returned on success, and -1 is returned on failure.


getmerge

Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates all files in the source directory into the local destination file. addnl is optional and specifies that a newline be appended at the end of each file.
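For example, a hypothetical invocation that merges every file under an HDFS directory into a single local file (both paths are placeholders):
  • hadoop fs -getmerge /user/hadoop/dir1 ./merged.txt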


ls

Usage: hadoop fs -ls <args>
For a file, returns file information in the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
For a directory, returns a list of its direct children, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return value:
0 is returned on success, and -1 is returned on failure.


lsr

Usage: hadoop fs -lsr <args>
Recursive version of the ls command. Similar to ls -R in Unix.
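For example, a hypothetical invocation that recursively lists everything under a user's home directory:
  • hadoop fs -lsr /user/hadoop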


mkdir

Usage: hadoop fs -mkdir <paths>
Takes path URIs as arguments and creates those directories. The behavior is similar to Unix mkdir -p: it creates all missing parent directories along the path.
Example:
  • hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
  • hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
Return value: 0 is returned on success, and -1 is returned on failure.


moveFromLocal

Usage: dfs -moveFromLocal <src> <dst>
Outputs a "not implemented" message.


mv

Usage: hadoop fs -mv URI [URI ...] <dest>
Moves files from the source path to the destination path. This command allows multiple source paths, in which case the destination path must be a directory. Moving files across file systems is not permitted.
Example:
  • hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
Return value: 0 is returned on success, and -1 is returned on failure.


put

Usage: hadoop fs -put <localsrc> ... <dst>
Copies one or more source paths from the local file system to the destination file system. It can also read input from stdin and write it to the destination file system.
  • hadoop fs -put localfile /user/hadoop/hadoopfile
  • hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
  • hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
  • hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
    Reads the input from stdin.
Return value: 0 is returned on success, and -1 is returned on failure.


rm

Usage: hadoop fs -rm URI [URI ...]
Deletes the specified files. Only files and empty directories are deleted; refer to the rmr command for recursive deletion.
Example:
  • hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return value: 0 is returned on success, and -1 is returned on failure.


rmr

Usage: hadoop fs -rmr URI [URI ...]
Recursive version of delete.
Example:
  • hadoop fs -rmr /user/hadoop/dir
  • hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return value: 0 is returned on success, and -1 is returned on failure.


setrep

Usage: hadoop fs -setrep [-R] <path>
Changes the replication factor of a file. The -R option recursively changes the replication factor of all files in a directory.
Example:
  • hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return value: 0 is returned on success, and -1 is returned on failure.


stat

Usage: hadoop fs -stat URI [URI ...]
Returns statistics about the specified path.
Example:
  • hadoop fs -stat path
Return value:
0 is returned on success, and -1 is returned on failure.


tail

Usage: hadoop fs -tail [-f] URI
Outputs the last kilobyte of the file to stdout. The -f option is supported, and it behaves as in Unix.
Example:
  • hadoop fs -tail pathname
Return value:
0 is returned on success, and -1 is returned on failure.


test

Usage: hadoop fs -test -[ezd] URI
Options:
-e: checks whether the file exists; returns 0 if it does.
-z: checks whether the file is zero bytes long; returns 0 if it is.
-d: returns 1 if the path is a directory, and 0 otherwise.
Example:
  • hadoop fs -test -e filename
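Because -test reports its result through the command's exit status, it is typically combined with shell conditionals. A minimal sketch (the path is a placeholder):
  • hadoop fs -test -e /user/hadoop/file1 && echo "file exists"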


text

Usage: hadoop fs -text <src>
Outputs the source file in text format. The allowed formats are zip and TextRecordInputStream.
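For example, a hypothetical invocation that prints a zip-format file in readable form (the path is a placeholder):
  • hadoop fs -text /user/hadoop/archive.zip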


touchz

Usage: hadoop fs -touchz URI [URI ...]
Creates a zero-byte empty file.
Example:
  • hadoop fs -touchz pathname
Return value:
0 is returned on success, and -1 is returned on failure.

 


Article transferred from: http://www.aboutyun.com/thread-6983-1-1.html

 
