Apache Hadoop 2.4.1 File System Shell

Source: Internet
Author: User
Tags: hdfs, dfs
Overview

The File System (FS) shell contains various commands that interact directly with HDFS as well as other file systems Hadoop supports, such as the local FS, HFTP FS, and S3 FS. The FS shell is invoked by:

bin/hdfs dfs <args>

All FS shell commands take path URIs as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme from the configuration file is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenodehost/parent/child or as a simple path.

Most FS commands behave like their corresponding Unix commands; differences are noted in the description of each command. Error information is sent to stderr, and normal output goes to stdout.
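Since the FS shell follows the Unix stdout/stderr convention, the split can be observed with any local command. A minimal sketch, using local ls as a stand-in for hdfs dfs -ls (all file names here are illustrative):

```shell
# Redirect the two streams separately; the FS shell behaves the same way.
workdir=$(mktemp -d)
ls /tmp > "$workdir/out.txt" 2> "$workdir/err.txt"                          # listing goes to stdout
ls /no/such/path-xyz >> "$workdir/out.txt" 2>> "$workdir/err.txt" || true   # error text goes to stderr
```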

appendToFile

Usage: hdfs dfs -appendToFile <localsrc> ... <dst>

Appends a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and appends it to the destination file system.

hdfs dfs -appendToFile localfile /usr/hadoop/hadoopfile

hdfs dfs -appendToFile localfile1 localfile2 /usr/hadoop/hadoopfile

hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile

hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

Exit code: 0 indicates success, and 1 indicates failure.
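The "-" form above mirrors plain Unix appending from stdin. A local sketch of the same append pattern (the file name is made up):

```shell
workdir=$(mktemp -d)
printf 'first line\n' > "$workdir/hadoopfile"           # existing target file
printf 'second line\n' | cat >> "$workdir/hadoopfile"   # analogue of: hdfs dfs -appendToFile - <dst>
```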

cat

Usage: hdfs dfs -cat URI [URI ...]

Copies source paths to stdout.

For example:

hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hdfs dfs -cat file:///file3 /user/hadoop/file4

Exit code: 0 indicates success, and 1 indicates failure.

chgrp

Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]

Changes the group association of files. The user must be the owner of the file or a superuser. Additional information is in the Permissions Guide.

Options

The -R option will make the change recursively through the directory structure.

chmod

Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]

Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the file or a superuser. Additional information is in the Permissions Guide.

Options

The -R option will make the change recursively through the directory structure.
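HDFS accepts the same octal and symbolic mode forms as Unix chmod, so the local command illustrates both syntaxes. A sketch (the file name is illustrative):

```shell
workdir=$(mktemp -d)
touch "$workdir/f"
chmod 640 "$workdir/f"       # octal mode: rw-r-----
chmod u+x,g-r "$workdir/f"   # symbolic mode list, the MODE[,MODE]... form
```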

chown

Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]

Changes the owner of files. The user must be a superuser. Additional information is in the Permissions Guide.

Options

The -R option will make the change recursively through the directory structure.

copyFromLocal

Usage: hdfs dfs -copyFromLocal <localsrc> URI

Similar to the put command, except that the source is restricted to a local file reference.

Options:

The -f option will overwrite the destination if it already exists.

copyToLocal

Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>

Similar to the get command, except that the destination is restricted to a local file reference.

count

Usage: hdfs dfs -count [-q] <paths>

Counts the number of directories, files, and bytes under the paths that match the specified file pattern. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME.

The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME.

For example:

hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

hdfs dfs -count -q hdfs://nn1.example.com/file1

Exit code: 0 indicates success, and 1 indicates failure.

cp

Usage: hdfs dfs -cp [-f] URI [URI ...] <dest>

Copies files from source to destination. This command allows multiple sources, in which case the destination must be a directory.

Options:

The -f option will overwrite the destination if it already exists.

For example:

hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2

hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir

Exit code: 0 indicates success, and 1 indicates failure.

du

Usage: hdfs dfs -du [-s] [-h] URI [URI ...]

Displays the sizes of files and directories contained in the given directory, or the length of a file if the path is just a file.

Options:

The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.

The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864).

Example:

hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1

Exit code: 0 indicates success, and 1 indicates failure.
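The -h conversion is just a divide-by-1024 loop over unit prefixes. A sketch of the same formatting in awk (the function name human_size is made up; the output format here is "64.0 M" rather than the exact string hdfs prints):

```shell
# Convert a byte count to a human-readable size, as -du -h does.
human_size() {
  awk -v b="$1" 'BEGIN {
    split("B K M G T", unit, " ")
    i = 1
    while (b >= 1024 && i < 5) { b /= 1024; i++ }   # step up a unit per factor of 1024
    printf "%.1f %s", b, unit[i]
  }'
}

human_size 67108864   # 64 * 1024 * 1024 bytes
```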

dus

Usage: hdfs dfs -dus <args>

Displays a summary of file lengths. This is an alternate form of hdfs dfs -du -s.

expunge

Usage: hdfs dfs -expunge

Empties the trash. Refer to the HDFS Architecture Guide for more information on the trash feature.

get

Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>

Copies files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.

Example:

hdfs dfs -get /user/hadoop/file localfile

hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile

Exit code: 0 indicates success, and 1 indicates failure.

getfacl

Usage: hdfs dfs -getfacl [-R] <path>

Displays the Access Control Lists (ACLs) of files and directories. If a directory has a default ACL, getfacl also displays the default ACL.

Options:

-R: lists the ACLs of all files and directories recursively.

path: the file or directory to list.

Example:

hdfs dfs -getfacl /file

hdfs dfs -getfacl -R /dir

Exit code: 0 indicates success, and 1 indicates failure.

getmerge

Usage: hdfs dfs -getmerge <src> <localdst> [addnl]

Takes a source directory and a destination file as input, and concatenates the files in src into the destination local file. Optionally, addnl can be set to enable adding a newline character at the end of each file.
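A local analogue of the concatenation getmerge performs, using cat over a source directory (all names are illustrative; plain globbing stands in for the HDFS directory listing):

```shell
workdir=$(mktemp -d)
mkdir "$workdir/src"
printf 'part one\n' > "$workdir/src/a"
printf 'part two\n' > "$workdir/src/b"
cat "$workdir/src"/* > "$workdir/merged"   # analogue of: hdfs dfs -getmerge <src> <localdst>
```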

ls

Usage: hdfs dfs -ls <args>

For a file, returns the file status in the following format:

permissions number_of_replicas userid groupid filesize modification_date modification_time filename

For a directory, returns a list of its direct children. A directory is listed as:

permissions userid groupid modification_date modification_time dirname

Example:

hdfs dfs -ls /user/hadoop/file1

Exit code: 0 indicates success, and 1 indicates failure.

lsr

Usage: hdfs dfs -lsr <args>

The recursive version of ls, similar to Unix ls -R.

mkdir

Usage: hdfs dfs -mkdir [-p] <paths>

Takes path URIs as arguments and creates directories.

Options:

The -p option behaves like Unix mkdir -p, creating parent directories along the path.

Example:

hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2

hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir1 hdfs://nn1.example.com/user/hadoop/dir2

Exit code: 0 indicates success, and 1 indicates failure.
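Since -p tracks Unix mkdir -p, the local behaviour is the reference point. A sketch (the paths are made up):

```shell
workdir=$(mktemp -d)
mkdir -p "$workdir/user/hadoop/dir1"   # creates every missing parent in one call
mkdir -p "$workdir/user/hadoop/dir1"   # already exists: with -p this is a no-op, not an error
```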

moveFromLocal

Usage: hdfs dfs -moveFromLocal <localsrc> <dst>

Similar to the put command, except that the source localsrc is deleted after it is copied.

moveToLocal

Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>

Displays a "Not implemented yet" message.

mv

Usage: hdfs dfs -mv URI [URI ...] <dest>

Moves files from source to destination. This command allows multiple sources, in which case the destination must be a directory. Moving files across file systems is not permitted.

Example:

hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2

hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1

0 indicates success, and 1 indicates failure.

put

Usage: hdfs dfs -put <localsrc> ... <dst>

Copies a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes it to the destination file system.

hdfs dfs -put localfile /user/hadoop/hadoopfile

hdfs dfs -put localfile /user/hadoop/hadoopdir

hdfs dfs -put localfile hdfs://nn.example.com/hadoopfile

hdfs dfs -put - hdfs://nn.example.com/hadoopfile (reads input from stdin)

Exit code: 0 indicates success, and -1 indicates failure.

rm

Usage: hdfs dfs -rm [-skipTrash] URI [URI ...]

Deletes the files specified as arguments. Only files and empty directories are deleted; refer to rmr for recursive deletes. If -skipTrash is specified, the trash, if enabled, is bypassed and the specified files are deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory.

Example:

hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir

Exit code: 0 indicates success, and -1 indicates failure.

rmr

Usage: hdfs dfs -rmr [-skipTrash] URI [URI ...]

The recursive version of delete. If -skipTrash is specified, the trash, if enabled, is bypassed and the specified files are deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory.

Example:

hdfs dfs -rmr hdfs://nn.example.com/file/

hdfs dfs -rmr /user/hadoop/dir

Exit code: 0 indicates success, and -1 indicates failure.

setfacl

Usage: hdfs dfs -setfacl [-R] [-b | -k | -m | -x <acl_spec> <path>] | [--set <acl_spec> <path>]

Sets Access Control Lists (ACLs) of files and directories.

Options:

-b: removes all but the base ACL entries. The entries for user, group, and others are retained for compatibility with permission bits.

-k: removes the default ACL.

-R: applies operations to all files and directories recursively.

-m: modifies the ACL. New entries are added to the ACL, and existing entries are retained.

-x: removes the specified ACL entries. Other ACL entries are retained.

--set: fully replaces the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.

acl_spec: a comma-separated list of ACL entries.

path: the file or directory to modify.

Example:

hdfs dfs -setfacl -m user:hadoop:rw- /file

hdfs dfs -setfacl -x user:hadoop /file

hdfs dfs -setfacl -b /file

hdfs dfs -setfacl -k /dir

hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file

hdfs dfs -setfacl -R -m user:hadoop:r-x /dir

hdfs dfs -setfacl -m default:user:hadoop:r-x /dir

Exit code: 0 indicates success, and -1 indicates failure.

setrep

Usage: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>

Changes the replication factor of a file. If path is a directory, the command recursively changes the replication factor of all files under the directory tree rooted at path.

Options:

The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.

The -R flag is accepted for backwards compatibility and has no effect.

Example:

hdfs dfs -setrep -w 3 /user/hadoop/dir1

Exit code: 0 indicates success, and -1 indicates failure.

stat

Usage: hdfs dfs -stat URI [URI ...]

Returns the stat information on the path.

Example:

hdfs dfs -stat path

Exit code: 0 indicates success, and -1 indicates an error.

tail

Usage: hdfs dfs -tail [-f] URI

Displays the last kilobyte of the file to stdout.

Options:

The -f option outputs appended data as the file grows, as in Unix.

Example:

hdfs dfs -tail pathname

Exit code: 0 indicates success, and -1 indicates an error.
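The last-kilobyte behaviour corresponds to byte-count tailing locally. A sketch with Unix tail -c (the file name is illustrative; 5 bytes stand in for the 1024 that hdfs tail uses):

```shell
workdir=$(mktemp -d)
printf 'aaaaaaaaaabbbbb' > "$workdir/log"     # 15 bytes total
tail -c 5 "$workdir/log" > "$workdir/last5"   # keep only the last 5 bytes
```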

test

Usage: hdfs dfs -test -[ezd] URI

Options:

The -e option checks whether the file exists, returning 0 if true.

The -z option checks whether the file is zero length, returning 0 if true.

The -d option checks whether the path is a directory, returning 0 if true.

Example:

hdfs dfs -test -e filename
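The three flags map closely onto the Unix test utility. A local sketch of the same checks (paths and variable names are made up; note Unix test -s is the inverse of the hdfs -z check):

```shell
workdir=$(mktemp -d)
touch "$workdir/empty"
mkdir "$workdir/dir"
test -e "$workdir/empty" && exists=0  || exists=1    # analogue of -test -e
test -s "$workdir/empty" && nonzero=0 || nonzero=1   # -s is "has size > 0", the inverse of -z
test -d "$workdir/dir"   && isdir=0   || isdir=1     # analogue of -test -d
echo "$exists $nonzero $isdir"
```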

text

Usage: hdfs dfs -text <src>

Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.

touchz

Usage: hdfs dfs -touchz URI [URI ...]

Creates a file of zero length.

Example:

hdfs dfs -touchz pathname

Exit code: 0 indicates success, and -1 indicates failure.
