HDFS common shell commands (reprint)
Original address: http://www.cuiweiyou.com/1405.html
0. shell
Running hadoop fs with no arguments prints the full usage:

hadoop fs
Usage: hadoop fs [generic options]
    [-appendToFile <localsrc> ... <dst>]
    [-cat [-ignoreCrc] <src> ...]
    [-checksum <src> ...]
    [-chgrp [-R] GROUP PATH...]
    [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
    [-chown [-R] [OWNER][:[GROUP]] PATH...]
    [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
    [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-count [-q] [-h] <path> ...]
    [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
    [-createSnapshot <snapshotDir> [<snapshotName>]]
    [-deleteSnapshot <snapshotDir> <snapshotName>]
    [-df [-h] [<path> ...]]
    [-du [-s] [-h] <path> ...]
    [-expunge]
    [-find <path> ... <expression> ...]
    [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-getfacl [-R] <path>]
    [-getfattr [-R] {-n name | -d} [-e en] <path>]
    [-getmerge [-nl] <src> <localdst>]
    [-help [cmd ...]]
    [-ls [-d] [-h] [-R] [<path> ...]]
    [-mkdir [-p] <path> ...]
    [-moveFromLocal <localsrc> ... <dst>]
    [-moveToLocal <src> <localdst>]
    [-mv <src> ... <dst>]
    [-put [-f] [-p] [-l] <localsrc> ... <dst>]
    [-renameSnapshot <snapshotDir> <oldName> <newName>]
    [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
    [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
    [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
    [-setfattr {-n name [-v value] | -x name} <path>]
    [-setrep [-R] [-w] <rep> <path> ...]
    [-stat [format] <path> ...]
    [-tail [-f] <file>]
    [-test -[defsz] <path>]
    [-text [-ignoreCrc] <src> ...]
    [-touchz <path> ...]
    [-truncate [-w] <length> <path> ...]
    [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>    specify an application configuration file
-D <property=value>    use value for given property
-fs <local|namenode:port>    specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
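The generic options work with any fs subcommand; for example, -D overrides a configuration property for a single invocation. A sketch (the replication value here is only illustrative):

hadoop fs -D dfs.replication=2 -put file:/root/test.txt /  # upload with a replication factor of 2 for this command only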
1. Print the file list ls
(1) Standard notation
hadoop fs -ls hdfs:/  # hdfs:/ names the HDFS file system path explicitly
(2) Abbreviated notation
hadoop fs -ls /  # defaults to the root directory of the HDFS file system
(3) Print a specified subdirectory
hadoop fs -ls /package/test/  # a directory inside the HDFS file system
(4) Show file sizes in the largest sensible unit
hadoop fs -ls -h /
-rw-r--r--   2 root supergroup 4.6M 2015-05-07 10:43 /dead_train.txt
(columns: permissions, replicas, user, group, size, creation date, creation time, path and file name)
(5) Recurse into subdirectories
hadoop fs -ls -R /  # if there are subdirectories, print them recursively
(6) Recurse and use human-readable sizes
hadoop fs -ls -h -R /
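The -d flag from the usage above lists a directory entry itself instead of its contents; a quick sketch (assuming /package exists):

hadoop fs -ls -d /package  # show the /package entry, not the files inside it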
2. Upload file/directory put, copyFromLocal
2.1 put
Copies from the local file system of the host to the HDFS cluster.
<1> Upload a new file
hadoop fs -put file:/root/test.txt hdfs:/  # upload the local test.txt to the HDFS root; the root must not already contain a file with that name, otherwise "File exists"
hadoop fs -put test.txt /test2.txt  # upload and rename the file
hadoop fs -put test1.txt test2.txt hdfs:/  # upload several files to an HDFS path at once
<2> Upload a folder
hadoop fs -put mypkg /newpkg  # upload and rename the folder
<3> Overwrite upload
hadoop fs -put -f /root/test.txt /  # a file of the same name in the HDFS directory is overwritten
2.2 copyFromLocal
<1> Upload a file and rename it
hadoop fs -copyFromLocal file:/test.txt hdfs:/test2.txt
<2> Overwrite upload
hadoop fs -copyFromLocal -f test.txt /test.txt
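put can also take its input from stdin when the source is given as "-"; a minimal sketch (the target name /from-stdin.txt is only an illustration):

echo "hello hdfs" | hadoop fs -put - /from-stdin.txt  # "-" makes put read the file content from stdin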
3. Download file/directory get, copyToLocal
Copies from the HDFS cluster to the local file system.
3.1 get
<1> Copy a file to a local directory
hadoop fs -get hdfs:/test.txt file:/root/
<2> Copy a file and rename it; the scheme prefixes can be omitted
hadoop fs -get /test.txt /root/test.txt
3.2 copyToLocal
<1> Copy a file to a local directory
hadoop fs -copyToLocal hdfs:/test.txt file:/root/
<2> Copy a file and rename it; the scheme prefixes can be omitted
hadoop fs -copyToLocal /test.txt /root/test.txt
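The -crc flag shown in the usage downloads the checksum alongside the file; a sketch (my understanding is the checksum lands locally as a hidden .test.txt.crc file):

hadoop fs -get -crc /test.txt /root/  # fetch the file together with its CRC checksum file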
4. Copy file/directory cp
4.1 From local to HDFS; same effect as put
hadoop fs -cp file:/test.txt hdfs:/test2.txt
4.2 From HDFS to HDFS
hadoop fs -cp hdfs:/test.txt hdfs:/test2.txt
hadoop fs -cp /test.txt /test2.txt  # abbreviated form
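Like put and get, cp accepts several sources when the destination is a directory, and -f overwrites an existing target; a sketch assuming /dir already exists:

hadoop fs -cp /test.txt /test2.txt /dir/  # copy several files into an existing HDFS directory
hadoop fs -cp -f /test.txt /test2.txt  # overwrite /test2.txt if it exists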
5. Move file mv
hadoop fs -mv hdfs:/test.txt hdfs:/dir/test.txt
hadoop fs -mv /test.txt /dir/test.txt  # abbreviated form
6. Delete file/directory rm
6.1 Delete a specified file
hadoop fs -rm /a.txt
6.2 Delete all txt files
hadoop fs -rm /*.txt
6.3 Recursively delete all files and directories
hadoop fs -rm -r /dir/
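When the HDFS trash is enabled, rm only moves files into the trash; the -skipTrash flag from the usage above deletes them immediately:

hadoop fs -rm -r -skipTrash /dir/  # bypass the trash and free the space at once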
7. Read a file cat
hadoop fs -cat /test.txt  # reads the file content as raw bytes
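cat accepts several sources and writes to stdout, so it composes with ordinary shell pipes; for example:

hadoop fs -cat /test.txt /test2.txt  # concatenate several HDFS files to stdout
hadoop fs -cat /test.txt | head -n 10  # look at just the first 10 lines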
8. Read the end of a file tail
hadoop fs -tail /test.txt  # prints the last 1KB of the file
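Like the Unix tail -f, the -f flag keeps the command running and prints data as it is appended to the file:

hadoop fs -tail -f /test.txt  # follow the file; stop with Ctrl-C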
9. Create an empty file touchz
hadoop fs -touchz /newfile.txt
10. Write to a file appendToFile
hadoop fs -appendToFile file:/test.txt hdfs:/newfile.txt  # read the local file's content and append it to the HDFS file
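appendToFile also accepts "-" as the source, in which case it appends whatever it reads from stdin:

echo "one more line" | hadoop fs -appendToFile - /newfile.txt  # append stdin to the HDFS file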
11. Create a folder mkdir
hadoop fs -mkdir /newdir /newdir2  # create several directories at once
hadoop fs -mkdir -p /newpkg/newpkg2/newpkg3  # create parent directories as needed
12. Change the replication factor of a file setrep
hadoop fs -setrep -R -w 2 /test.txt
-R recursively changes the replication factor of every file under a directory.
-w waits until the new replication factor has actually been reached before returning; adding this flag makes the command blocking.
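The replication factor shows up as the second column of hadoop fs -ls, so the change is easy to verify:

hadoop fs -ls /test.txt  # the second column now reads 2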
13. Get the logical size of files/directories du
hadoop fs -du /  # show the size of each file and folder in the HDFS root directory
hadoop fs -du -h /  # the same, with sizes in the largest sensible unit
hadoop fs -du -s /  # show only the total size of the HDFS root directory, i.e. the sum of all its files and folders
14. Get physical-space information for an HDFS directory count
hadoop fs -count -q /  # show the physical-space information for the HDFS root directory
-q adds the quota columns; without it only the last four items are displayed.
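With -q the output has eight columns, in this order:

QUOTA  REMAINING_QUOTA  SPACE_QUOTA  REMAINING_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME

Without -q only DIR_COUNT, FILE_COUNT, CONTENT_SIZE and PATHNAME appear.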