Hadoop Shell Command official website translation


http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html#Overview

FS Shell
The file system (FS) shell is invoked as bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenode:namenodeport/parent/child, or simply as /parent/child (given that your configuration points to namenode:namenodeport by default). Most FS shell commands behave like their corresponding Unix commands; differences are noted in the per-command descriptions below. Error information is sent to stderr and all other output is sent to stdout.
  • appendToFile
    • Usage: hdfs dfs -appendToFile <localsrc> ... <dst>
    • Appends a single file or multiple files from the local file system to the destination file system. Also reads input from stdin and appends it to the destination file system.
    • Example:
      • hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile
      • hdfs dfs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
      • hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
      • hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
    • Exit Code: Returns 0 on success and 1 on error.
  • cat
    • Usage: hdfs dfs -cat URI [URI ...]
    • Copies the contents of the specified files to stdout.
    • Example:
      • hdfs dfs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
      • hdfs dfs -cat file:///file3 /user/hadoop/file4
    • Exit Code: Returns 0 on success and -1 on error.
  • chgrp
    • Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]
    • Changes the group association of files. With -R, the change is applied recursively through the directory structure. The user must be the owner of the file or a superuser. Additional information is in the HDFS Permissions Guide.
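    • Example (illustrative; the group name hadoop and the path below are assumed):
      • hdfs dfs -chgrp -R hadoop /user/hadoop/dir1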
  • chmod
    • Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
    • Changes the permissions of files. With -R, the change is applied recursively through the directory structure. The user must be the owner of the file or a superuser. Additional information is in the HDFS Permissions Guide.
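    • Example (illustrative; the mode and paths below are assumed):
      • hdfs dfs -chmod -R 755 /user/hadoop/dir1
      • hdfs dfs -chmod u+x /user/hadoop/file1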
  • chown
    • Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
    • Changes the owner of files. With -R, the change is applied recursively through the directory structure. The user must be a superuser. Additional information is in the HDFS Permissions Guide.
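    • Example (illustrative; the owner, group and path below are assumed):
      • hdfs dfs -chown -R hadoop:hadoop /user/hadoop/dir1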
  • copyFromLocal
    • Usage: hdfs dfs -copyFromLocal <localsrc> URI
    • Similar to the put command, except that the source is restricted to a local file reference.
    • Options: The -f option overwrites the destination if it already exists.
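    • Example (illustrative; paths are assumed):
      • hdfs dfs -copyFromLocal localfile /user/hadoop/hadoopfile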
  • copyToLocal
    • Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
    • Similar to the get command, except that the destination is restricted to a local file reference.
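    • Example (illustrative; paths are assumed):
      • hdfs dfs -copyToLocal /user/hadoop/hadoopfile localfile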
  • count
    • Usage: hdfs dfs -count [-q] <paths>
    • Counts the number of directories, files and bytes under the paths that match the specified file pattern.
    • The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
    • The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
    • Example:
      • hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
      • hdfs dfs -count -q hdfs://nn1.example.com/file1
    • Exit Code: Returns 0 on success and -1 on error.
  • cp
    • Usage: hdfs dfs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>
    • Copies files from the source path(s) to the destination path. This command allows multiple source paths, in which case the destination must be a directory.
      • Options: The -f option overwrites the destination if it already exists.
      • The -p option preserves file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no argument, it preserves timestamps, ownership and permission. If -pa is specified, permission is preserved as well, because an ACL is a super-set of permission.
    • Example:
      • hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2
      • hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
    • Exit Code: Returns 0 on success and -1 on error.
  • du
    • Usage: hdfs dfs -du [-s] [-h] URI [URI ...]
    • Displays the sizes of the files contained in the given directory, or, when only a file is specified, the size of that file.
    • Options:
      • The -s option displays an aggregate summary of file lengths rather than the individual files.
      • The -h option formats file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
    • Example: hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
    • Exit Code: Returns 0 on success and -1 on error.
  • dus
    • Usage: hdfs dfs -dus <args>
    • Displays a summary of file lengths. This is an alternate form of hdfs dfs -du -s.
  • expunge
    • Usage: hdfs dfs -expunge
    • Empties the trash. Refer to the HDFS design documentation for more information on the trash feature.
  • get
    • Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>
    • Copies files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and their CRCs may be copied using the -crc option.
    • Example:
      • hdfs dfs -get /user/hadoop/file localfile
      • hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile
    • Exit Code: Returns 0 on success and -1 on error.
  • getfacl
    • Usage: hdfs dfs -getfacl [-R] <path>
    • Displays the Access Control Lists (ACLs) of files and directories. If a directory has a default ACL, getfacl also displays the default ACL.
    • Options:
      • -R: List the ACLs of all files and directories recursively.
      • path: File or directory to list.
    • Examples:
      • hdfs dfs -getfacl /file
      • hdfs dfs -getfacl -R /dir
    • Exit Code: Returns 0 on success and non-zero on error.
  • getfattr
    • Usage: hdfs dfs -getfattr [-R] -n name | -d [-e en] <path>
    • Displays the extended attribute names and values (if any) of a file or directory.
    • Options:
      • -R: Recursively list the attributes for all files and directories.
      • -n name: Dump the named extended attribute value.
      • -d: Dump all extended attribute values associated with the path.
      • -e encoding: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
      • path: The file or directory.
    • Examples:
      • hdfs dfs -getfattr -d /file
      • hdfs dfs -getfattr -R -n user.myAttr /dir
    • Exit Code: Returns 0 on success and non-zero on error.
  • getmerge
    • Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
    • Takes a source directory and a destination file as input and concatenates the files in the source directory into the destination local file. Optionally, addnl can be set to add a newline character at the end of each file.
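    • Example (illustrative; the source directory and local output path are assumed):
      • hdfs dfs -getmerge /user/hadoop/output /tmp/output.txt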
  • ls
    • Usage: hdfs dfs -ls <args>
    • If it is a file, returns stat on the file in the following format:
    • filename <number of replicas> filesize modification_date modification_time permissions userid groupid
    • If it is a directory, it returns a list of its direct children, as in Unix.
    • A directory is listed as:
      • dirname <dir> modification_date modification_time permissions userid groupid
    • Exit Code: Returns 0 on success and -1 on error.
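    • Example (illustrative; the path is assumed):
      • hdfs dfs -ls /user/hadoop/file1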
  • lsr
    • Usage: hadoop fs -lsr <args>
    • Recursive version of the ls command. Similar to Unix ls -R.
  • mkdir
    • Usage: hdfs dfs -mkdir [-p] <paths>
    • Takes the path URIs given as arguments and creates those directories.
    • Options: The -p option behaves much like Unix mkdir -p, creating parent directories along the path.
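    • Example (illustrative; host names and paths are assumed):
      • hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
      • hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir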
  • moveFromLocal
    • Usage: hdfs dfs -moveFromLocal <localsrc> <dst>
    • Similar to the put command, except that the source localsrc is deleted after it is copied.
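    • Example (illustrative; paths are assumed):
      • hdfs dfs -moveFromLocal localfile /user/hadoop/hadoopfile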
  • moveToLocal
    • Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>
    • Displays a "Not implemented yet" message.
  • mv
    • Usage: hdfs dfs -mv URI [URI ...] <dest>
    • Moves files from the source path(s) to the destination path. This command allows multiple source paths, in which case the destination must be a directory. Moving files across file systems is not permitted.
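    • Example (illustrative; host name and paths are assumed):
      • hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2
      • hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/dir1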
  • put
    • Usage: hdfs dfs -put <localsrc> ... <dst>
    • Copies a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes it to the destination file system.
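    • Example (illustrative; paths are assumed; the last form reads from stdin):
      • hdfs dfs -put localfile /user/hadoop/hadoopfile
      • hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
      • hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile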
  • rm
    • Usage: hdfs dfs -rm [-skipTrash] URI [URI ...]
    • Deletes the files specified as arguments. Only non-empty directories and files are deleted. If the -skipTrash option is specified, the trash, if enabled, is bypassed and the specified file(s) are deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory. Refer to rmr for recursive deletes.
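    • Example (illustrative; host name and paths are assumed):
      • hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir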
  • rmr
    • Usage: hdfs dfs -rmr [-skipTrash] URI [URI ...]
    • Recursive version of delete. If the -skipTrash option is specified, the trash, if enabled, is bypassed and the specified file(s) are deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory.
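    • Example (illustrative; host name and paths are assumed):
      • hdfs dfs -rmr /user/hadoop/dir
      • hdfs dfs -rmr -skipTrash hdfs://nn.example.com/user/hadoop/dir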
  • setfacl
    • Usage: hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]
    • Sets the Access Control Lists (ACLs) of files and directories.
    • Options:
      • -b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
      • -k: Remove the default ACL.
      • -R: Apply operations to all files and directories recursively.
      • -m: Modify the ACL. New entries are added to the ACL, and existing entries are retained.
      • -x: Remove the specified ACL entries. Other ACL entries are retained.
      • --set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
      • acl_spec: Comma-separated list of ACL entries.
      • path: File or directory to modify.
    • Examples:
      • hdfs dfs -setfacl -m user:hadoop:rw- /file
      • hdfs dfs -setfacl -x user:hadoop /file
      • hdfs dfs -setfacl -b /file
      • hdfs dfs -setfacl -k /dir
      • hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
      • hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
      • hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
  • setfattr
    • Usage: hdfs dfs -setfattr -n name [-v value] | -x name <path>
    • Sets an extended attribute name and value for a file or directory.
    • Options:
      • -b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
      • -n name: The extended attribute name.
      • -v value: The extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, it is taken as a hexadecimal number. If the argument begins with 0s or 0S, it is taken as a base64 encoding.
      • -x name: Remove the extended attribute.
      • path: The file or directory.
    • Examples:
      • hdfs dfs -setfattr -n user.myAttr -v myValue /file
      • hdfs dfs -setfattr -n user.noValue /file
      • hdfs dfs -setfattr -x user.myAttr /file
  • setrep
    • Usage: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>
    • Changes the replication factor of a file. If path is a directory, the command recursively changes the replication factor of all files under the directory tree rooted at path.
    • Options:
      • The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
      • The -R flag is accepted for backwards compatibility. It has no effect.
    • Example: hdfs dfs -setrep -w 3 /user/hadoop/dir1
  • stat
    • Usage: hdfs dfs -stat URI [URI ...]
    • Returns the stat information on the specified path.
    • Example: hdfs dfs -stat path
  • tail
    • Usage: hdfs dfs -tail [-f] URI
    • Displays the last kilobyte of the file to stdout. The -f option is supported and behaves as in Unix.
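    • Example (illustrative; the path is assumed):
      • hdfs dfs -tail /user/hadoop/logfile
      • hdfs dfs -tail -f /user/hadoop/logfile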
  • test
    • Usage: hdfs dfs -test -[ezd] URI
    • Options:
      • -e: Checks whether the file exists. Returns 0 if it does.
      • -z: Checks whether the file is zero bytes in size. Returns 0 if it is.
      • -d: Returns 1 if the path is a directory, otherwise returns 0.
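    • Example (illustrative; the path is assumed):
      • hdfs dfs -test -e /user/hadoop/file1
      • hdfs dfs -test -z /user/hadoop/file1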
  • text
    • Usage: hdfs dfs -text <src>
    • Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
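    • Example (illustrative; the path is assumed):
      • hdfs dfs -text /user/hadoop/hadoopfile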
  • touchz
    • Usage: hdfs dfs -touchz URI [URI ...]
    • Creates a file of zero length.
    • Example:
      • hdfs dfs -touchz pathname