When choosing a replica to read, HDFS follows this rule: it is preferred to read data on the local rack.
Commands commonly used in HDFS
1. hadoop fs
hadoop fs -ls
hadoop fs -lsr
hadoop fs -mkdir /user/hadoop
hadoop fs -put a.txt /user/hadoop
expunge
Usage: hadoop fs -expunge
Empty the Trash. Refer to the HDFS design document for more information on the trash feature.
get
Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
hadoop fs -get /user/hadoop/file localfile
hadoop fs -get hdfs://host:port/user/hadoop/file localfile
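For comparison, the same copy can be done programmatically through Hadoop's Java FileSystem API. This is only a minimal sketch, not the shell's implementation; the namenode URI and paths are placeholders for your cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsGet {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder namenode address
        try (FileSystem fs = FileSystem.get(conf)) {
            // Same effect as: hadoop fs -get /user/hadoop/file localfile
            fs.copyToLocalFile(new Path("/user/hadoop/file"), new Path("localfile"));
        }
    }
}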
getmerge
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a target file as input and concatenates the files in src into the destination local file.
addnl is optional and is used to specify that a newline character be added at the end of each file.
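A rough Java equivalent of getmerge can be sketched with the FileSystem API; this is not how the shell implements it, and the directory, output name, and namenode URI are placeholders.

import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsGetMerge {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder
        boolean addnl = true; // mirrors the optional addnl flag
        try (FileSystem fs = FileSystem.get(conf);
             OutputStream out = Files.newOutputStream(Paths.get("merged.txt"))) {
            for (FileStatus st : fs.listStatus(new Path("/user/hadoop/dir"))) {
                if (st.isDirectory()) continue;              // merge only regular files
                try (FSDataInputStream in = fs.open(st.getPath())) {
                    IOUtils.copyBytes(in, out, 4096, false); // false: keep out open
                }
                if (addnl) out.write('\n');                  // newline after each file
            }
        }
    }
}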
ls
Usage: hadoop fs -ls <args>
If it is a file, the file information is returned in the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
If it is a directory, it returns a list of its direct children, as in Unix. A directory is listed as:
dirname <dir> modification_time permissions userid groupid
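A sketch of the same listing through the Java API, printing roughly the fields -ls shows; the namenode URI and path are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsLs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus st : fs.listStatus(new Path("/user/hadoop"))) {
                // permissions, replicas, owner, group, size, mtime, name
                System.out.printf("%s %d %s %s %d %d %s%n",
                        st.getPermission(), st.getReplication(),
                        st.getOwner(), st.getGroup(), st.getLen(),
                        st.getModificationTime(), st.getPath().getName());
            }
        }
    }
}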
If a replica is lost for some reason, a backup copy can still be found on the same rack (advantage: fast, avoiding data transfer across the network), and if that whole rack is damaged, the data can still be found on a second rack (advantage: safety).
3. Actual Operation of HDFS
First, start Hadoop:
[root@master ~]# su hadoop    // switch to the hadoop user
[hadoop@master root]$ cd /us
FS Shell
cat
chgrp
chmod
chown
copyFromLocal
copyToLocal
cp
du
dus
expunge
get
getmerge
ls
lsr
mkdir
moveFromLocal
mv
put
rm
rmr
setrep
stat
tail
test
text
touchz
The FileSystem (FS) shell commands are invoked in the form bin/hadoop fs <args>.
Original address: http://hadoop.apache.org/docs/r1.0.4/cn/hdfs_shell.html
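The same shell commands can also be driven from Java through the org.apache.hadoop.fs.FsShell class, which implements Hadoop's Tool interface. A minimal sketch, with the namenode URI as a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class RunFsShell {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder
        // Same effect as: bin/hadoop fs -ls /user/hadoop
        int rc = ToolRunner.run(conf, new FsShell(), new String[] {"-ls", "/user/hadoop"});
        System.exit(rc);
    }
}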
Therefore, the first operation we want to introduce is writing data to the cluster. Assume that the user is "someone" (adjust to your actual situation). The actions can be performed on any machine that can access the cluster, where the conf/hadoop-site.xml file must be set to the namenode address of the cluster. The commands can be run from the installation directory, which can be /home/someone/src/
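The same write can be done through the Java FileSystem API instead of the shell. The user "someone" is taken from the scenario above; the namenode URI below is a placeholder (normally it is read from the cluster config file).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder namenode address
        try (FileSystem fs = FileSystem.get(conf)) {
            // Same effect as: hadoop fs -put a.txt /user/someone/a.txt
            fs.copyFromLocalFile(new Path("a.txt"), new Path("/user/someone/a.txt"));
        }
    }
}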
Running hadoop fs -mkdir input failed with "mkdir: 'input': No such file or directory"; after adding a leading /, the put succeeded. But after executing WordCount there was still an error: "Input Path does not exist: hdfs://hadoopmaster:9000/user/hadoop/input". Here the expected input folder path is /user/hadoop/input, but my path was /usr/local/input; this mismatch is likely the reason the path cannot be found. Refer to the answer in http://stackoverflow.com/questions/20821584/
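One way to debug this kind of mismatch is to print what a relative path actually resolves to before running the job. A minimal sketch, assuming the hdfs://hadoopmaster:9000 address from the error message above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckInputPath {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://hadoopmaster:9000"); // from the error above
        try (FileSystem fs = FileSystem.get(conf)) {
            Path input = new Path("input"); // relative paths resolve against the working dir
            // Typically prints something like hdfs://hadoopmaster:9000/user/<user>/input
            System.out.println(input.makeQualified(fs.getUri(), fs.getWorkingDirectory()));
            if (!fs.exists(input)) {
                fs.mkdirs(input); // create the input directory before running the job
            }
        }
    }
}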
Returns 0 on success and -1 on error.
dus
Usage: hadoop fs -dus <args>
Displays a summary of file lengths.
First, enter the ~/hadoopinstall/hadoop directory and execute the following command:
[dbrg@dbrg-1:hadoop]$ bin/hadoop namenode -format
Barring surprises, you should be told that the format succeeded. If it doesn't work, go to the hadoop/logs/ directory and check the log files. Now it's time to officially start Hadoop.
Environment:
[root@vm8028 soft]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
[root@vm8028 soft]# uname -a
Linux vm8028 2.6.32-431.el6.x86_64 #1 SMP Fri Nov ... UTC ... x86_64 x86_64 x86_64 GNU/Linux
[root@vm8028 soft]# hadoop version
Hadoop 2.7.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
Compiled by jenkins on 2015-06-29T06:04Z
Compiled with protoc 2.5.0
From source with checksum ...
on the master machine.
2. Start the distributed file services:
sbin/start-all.sh
or
sbin/start-dfs.sh
sbin/start-yarn.sh
Use your browser to visit http://192.168.23.111:50070 on the master node to view the NameNode status and browse the DataNodes.
Use your browser to visit http://192.168.23.111:8088 to see all applications.
3. Stop the distributed file services:
sbin/stop-all.sh
4. File management
To create the SWVTC directory in HDFS, the operation command is as follows:
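In the shell this step is done with hadoop fs -mkdir. Below is a minimal Java sketch of the same operation; the namenode address (master IP from the URLs above with an assumed port 9000) and the lowercase /swvtc path are assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsMkdir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.23.111:9000"); // assumed address and port
        try (FileSystem fs = FileSystem.get(conf)) {
            Path dir = new Path("/swvtc"); // directory name from the text; exact case assumed
            if (fs.mkdirs(dir)) {
                FileStatus st = fs.getFileStatus(dir);
                System.out.println("created " + st.getPath() + " owner=" + st.getOwner());
            }
        }
    }
}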