This document describes how to work with the Hadoop file system (HDFS) through hands-on experiments.
First, let's look at some common Hadoop file system shell commands.
First common command: hadoop fs -ls
For example, you can list the files and folders in the root directory of the file system:
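A minimal sketch of the listing step, assuming a running HDFS cluster and the `hadoop` binary on the PATH:

```shell
# List the files and folders in the HDFS root directory
# (requires a running cluster; output format resembles `ls -l`)
hadoop fs -ls /
```

The trailing `/` is the HDFS root path, not a local directory.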
Second common command: hadoop fs -mkdir
For example, you can create a subdirectory under the root directory of HDFS:
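A sketch of the mkdir step; the directory name `testdir` is only an illustration, not from the original text:

```shell
# Create a subdirectory under the HDFS root
# (the name `testdir` is a placeholder; assumes a running cluster)
hadoop fs -mkdir /testdir

# Verify that the directory now appears in the root listing
hadoop fs -ls /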
Third common command: hadoop fs -get
For example, you can copy the jialingege folder in the root directory of HDFS to a local directory:
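A sketch of the download step, assuming the `/jialingege` folder exists in HDFS as described above:

```shell
# Copy the jialingege folder from the HDFS root to the current local directory
hadoop fs -get /jialingege ./jialingege

# Confirm the copy with the ordinary Linux ls
ls ./jialingege
```

Note that `-get` takes the HDFS source first and the local destination second.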
Fourth common command: hadoop fs -put <srcfile> <destfile>
For example, you can copy stop-all.sh from the current local directory to the root directory of HDFS:
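A sketch of the upload step; it assumes stop-all.sh is present in the local working directory (it ships in the Hadoop bin directory):

```shell
# Copy stop-all.sh from the current local directory into the HDFS root
hadoop fs -put stop-all.sh /

# Verify the upload by listing the root again
hadoop fs -ls /
```

Unlike `-get`, `-put` takes the local source first and the HDFS destination second.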
For more Hadoop file system commands, open the File System Shell page of the official documentation: http://hadoop.apache.org/docs/stable/file_system_shell.html
As you can see, many of these commands work the same way as their Linux counterparts. For example, cat in Linux prints the contents of a file to the screen, and it has exactly the same meaning in the Hadoop file system. Note, however, that the file system commands changed in hadoop 1.1.2, while the official website has not been updated accordingly. As an example, consider the cat command:
In the documentation the command prefix is shown as "dfs", but in hadoop 1.1.2 it is "fs". We also saw this in the earlier hands-on practice with the HDFS command-line tool.
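The difference can be illustrated with cat; the file path reuses the stop-all.sh uploaded earlier, and assumes a running cluster:

```shell
# Style shown in the older documentation (deprecated prefix):
hadoop dfs -cat /stop-all.sh

# Style used in hadoop 1.1.2:
hadoop fs -cat /stop-all.sh
```

Both print the file's contents to the screen; only the subcommand prefix differs.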
For the remaining Hadoop file system operations, follow the official documentation and try each one step by step as a small experiment; I will not go into further detail here.