Hadoop Basics Tutorial, Chapter 3 HDFS: Distributed File System (3.5 HDFS Basic Commands) (draft)

Source: Internet
Author: User
Tags: mkdir, stdin, hdfs dfs, hadoop fs
Chapter 3 HDFS: Distributed File System, 3.5 HDFS Basic Commands

Official documentation for the HDFS commands:
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html

3.5.1 Usage

[root@node1 ~]# hdfs dfs
Usage: hadoop fs [generic options]
    [-appendToFile <localsrc> ... <dst>]
    [-cat [-ignoreCrc] <src> ...]
    [-checksum <src> ...]
    [-chgrp [-R] GROUP PATH...]
    [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
    [-chown [-R] [OWNER][:[GROUP]] PATH...]
    [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
    [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-count [-q] [-h] [-v] [-x] <path> ...]
    [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
    [-createSnapshot <snapshotDir> [<snapshotName>]]
    [-deleteSnapshot <snapshotDir> <snapshotName>]
    [-df [-h] [<path> ...]]
    [-du [-s] [-h] [-x] <path> ...]
    [-expunge]
    [-find <path> ... <expression> ...]
    [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-getfacl [-R] <path>]
    [-getfattr [-R] {-n name | -d} [-e en] <path>]
    [-getmerge [-nl] <src> <localdst>]
    [-help [cmd ...]]
    [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
    [-mkdir [-p] <path> ...]
    [-moveFromLocal <localsrc> ... <dst>]
    [-moveToLocal <src> <localdst>]
    [-mv <src> ... <dst>]
    [-put [-f] [-p] [-l] <localsrc> ... <dst>]
    [-renameSnapshot <snapshotDir> <oldName> <newName>]
    [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
    [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
    [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
    [-setfattr {-n name [-v value] | -x name} <path>]
    [-setrep [-R] [-w] <rep> <path> ...]
    [-stat [format] <path> ...]
    [-tail [-f] <file>]
    [-test -[defsz] <path>]
    [-text [-ignoreCrc] <src> ...]
    [-touchz <path> ...]
    [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|resourcemanager:port>                specify a ResourceManager
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
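As a quick illustration of the generic-option syntax above, the sketch below uses -D to override one configuration property for a single invocation and -usage to print one subcommand's syntax summary. This is a hedged sketch: it assumes an hdfs client may or may not be on the PATH, and when it is not, it only reports what would run; the file paths are made up for the example.

```shell
#!/bin/sh
# Hedged sketch of the generic options listed above.
# Assumption: the "hdfs" client may not be installed here; only run
# cluster commands when it is, otherwise report what would run.
if command -v hdfs >/dev/null 2>&1; then
    # -D overrides one configuration property for this invocation only;
    # dfs.replication=1 asks for a single replica of the written file.
    hdfs dfs -D dfs.replication=1 -put /etc/hosts /tmp/hosts.copy
    # -usage prints the syntax summary for a single subcommand.
    hdfs dfs -usage mkdir
    note="ran against cluster"
else
    note="no hdfs client on PATH; would run: hdfs dfs -D dfs.replication=1 -put /etc/hosts /tmp/hosts.copy"
fi
echo "$note"
```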
3.5.2 hdfs dfs -mkdir

The -p option behaves much like Unix mkdir -p, creating parent directories along the path.
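Because -p mirrors the local Unix mkdir -p, its behavior can be checked on any machine without a cluster; the directory names below are arbitrary:

```shell
#!/bin/sh
# Local analogy for hdfs dfs -mkdir -p: Unix mkdir -p creates every
# missing parent directory along the path instead of failing.
workdir=$(mktemp -d)

mkdir -p "$workdir/a/b/c"        # a, a/b, and a/b/c are all created in one call
if [ -d "$workdir/a/b/c" ]; then
    result="created"
else
    result="missing"
fi
echo "$result"

rm -rf "$workdir"
```

Without -p, creating a/b/c directly would fail whenever a/b does not already exist; the same is true of hdfs dfs -mkdir.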

[root@node1 ~]# hdfs dfs -mkdir -p input
[root@node1 ~]# hdfs dfs -mkdir -p /abc

Directories created with a relative path are placed under the /user/{username}/ directory by default, where {username} is the current user name. So the input directory created above ends up under /user/root/.
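The default-home rule can be stated directly: a relative path P resolves to /user/<current user>/P. The sketch below computes that resolution locally, with no cluster needed; the path name demo is arbitrary.

```shell
#!/bin/sh
# Resolve an HDFS relative path the way the client does: relative paths
# are anchored at /user/<current user>. No cluster connection is needed
# to compute the resulting absolute path.
rel_path="demo"
hdfs_home="/user/$(id -un)"
resolved="$hdfs_home/$rel_path"
echo "$resolved"        # /user/root/demo when run as root
```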

The second command creates an /abc directory under the HDFS root directory.

3.5.3 hdfs dfs -ls

[root@node1 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:40 /abc
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user
[root@node1 ~]# hdfs dfs -ls /user
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root
[root@node1 ~]# hdfs dfs -ls /user/root
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root/input
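The three listings above can be collapsed into one recursive walk with the -R flag from the usage summary. A hedged sketch, assuming a cluster may or may not be reachable from this machine:

```shell
#!/bin/sh
# Hedged sketch: hdfs dfs -ls -R walks a directory tree recursively,
# much like ls -R on a local file system. Assumption: the hdfs client
# may be absent, in which case we only describe the command.
if command -v hdfs >/dev/null 2>&1; then
    listing=$(hdfs dfs -ls -R /user 2>&1)
else
    listing="no hdfs client: 'hdfs dfs -ls -R /user' would list /user/root and /user/root/input in one pass"
fi
echo "$listing"
```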
3.5.4 hdfs dfs -put

Usage: hdfs dfs -put [-f] [-p] [-l] <localsrc> ... <dst>

Copy a single src, or multiple srcs, from the local file system to the destination file system. Can also read input from stdin and write it to the destination file system.

hdfs dfs -put localfile /user/hadoop/hadoopfile
hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

Exit Code:
Returns 0 on success and -1 on error.
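The stdin form in the last example (a source of "-") is handy for piping command output straight into HDFS without a temporary local file. A hedged sketch, assuming a cluster may or may not be reachable; the target path is made up for the example:

```shell
#!/bin/sh
# Hedged sketch of the stdin form of -put: a source of "-" makes the
# client read standard input. Assumption: the hdfs client may be
# absent, in which case we only print the pipeline we would run.
target="/tmp/from_stdin.txt"        # illustrative target path
if command -v hdfs >/dev/null 2>&1; then
    echo "hello hdfs" | hdfs dfs -put -f - "$target"
    hdfs dfs -cat "$target"         # read the file back to confirm
    ran="cluster"
else
    echo "would run: echo \"hello hdfs\" | hdfs dfs -put - $target"
    ran="dry"
fi
```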
