Hadoop shell command operations: typing hadoop and pressing Enter lists the available commands.
The commonly used commands are:
hadoop namenode -format: this command formats the HDFS file system and is used before starting Hadoop for the first time.
hadoop dfsadmin: this is the HDFS administration command; typing hadoop dfsadmin and pressing Enter lists its detailed subcommands.
The commonly used hadoop dfsadmin commands are (a combined sketch follows the list):
1) hadoop dfsadmin -report: shows the running status of the HDFS cluster.
2) hadoop dfsadmin -safemode enter | leave | get | wait: manages safe mode for HDFS (file uploads and modifications are not allowed while in safe mode).
3) hadoop dfsadmin -refreshNodes: refreshes the nodes (re-reads the hosts and exclude files so that newly added nodes and decommissioned nodes are recognized by the NameNode; this command is used when adding or decommissioning nodes).
4) hadoop dfsadmin -help: shows command help, for example hadoop dfsadmin -help safemode.
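A combined sketch of these dfsadmin operations (this assumes the cluster's hosts and exclude files are already configured):
hadoop dfsadmin -report          # print capacity, remaining space and DataNode status
hadoop dfsadmin -safemode get    # check whether the NameNode is in safe mode
hadoop dfsadmin -safemode enter  # enter safe mode (uploads and modifications are rejected)
hadoop dfsadmin -safemode leave  # leave safe mode
hadoop dfsadmin -refreshNodes    # re-read the hosts and exclude files
hadoop dfsadmin -help safemode   # show help for the safemode subcommand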
hadoop fsck checks the health of the Hadoop file system: it can show which data blocks a file resides in, delete corrupt blocks, and find missing blocks. Typing hadoop fsck with no arguments prints its usage.
An example of the hadoop fsck command:
hadoop fsck /lavimer/liao.txt -files -blocks shows the blocks and the health status of the file.
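A sketch of typical fsck invocations (the path /lavimer/liao.txt is the file from the example above; -locations is an additional standard fsck flag):
hadoop fsck /                                             # check the health of the whole file system
hadoop fsck /lavimer/liao.txt -files -blocks              # list the blocks that make up the file
hadoop fsck /lavimer/liao.txt -files -blocks -locations   # also show which DataNodes hold each block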
hadoop balancer rebalances the data blocks across DataNodes (disk balancing).
hadoop jar runs a jar package, for example: hadoop jar liao.jar arg1 arg2
hadoop archive archives files into a HAR file. This command is very useful: Hadoop can use it to deal with the problem of processing a large number of small files.
For example: hadoop archive -archiveName liao.har -p /usr / archives all the files under the /usr directory into liao.har under / (the HDFS root directory).
To view the internal structure of the HAR package after archiving, use the hadoop fs -lsr /liao.har command.
To view the archived files themselves in more detail, use hadoop fs -lsr har:///liao.har (note the three slashes).
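Putting the archiving steps together, a sketch of the whole workflow (liao.har and /usr are the names used in the example above):
hadoop archive -archiveName liao.har -p /usr /   # archive everything under /usr into /liao.har
hadoop fs -lsr /liao.har                         # inspect the internal structure of the HAR package
hadoop fs -lsr har:///liao.har                   # list the archived files themselves (note the three slashes)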
Next comes one of the most important sets of shell commands in Hadoop: the shell operation commands for HDFS.
Since HDFS is a distributed file system for storing and accessing data, operating on HDFS means performing the basic file-system operations: creating, modifying and deleting files, changing permissions, and creating, deleting and renaming directories. The HDFS operation commands are similar to the Linux shell commands for files, such as ls, mkdir, rm, and so on.
When working with HDFS, first make sure that Hadoop is running; you can use the jps command to verify that each Hadoop process is up.
Note: if the jps command shows that the five Hadoop processes (NameNode, DataNode, SecondaryNameNode, TaskTracker, JobTracker) have started, Hadoop has started successfully.
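A minimal check, assuming a pseudo-distributed Hadoop 1.x installation (the process IDs shown are only illustrative):
jps
# 2821 NameNode
# 2943 DataNode
# 3061 SecondaryNameNode
# 3142 JobTracker
# 3267 TaskTracker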
Executing the hadoop fs command with no arguments shows the commands available for HDFS.
Many command options are displayed, though not all of them; the complete list of supported command options is summarized in the table below:
Note: the paths in the table include both HDFS paths and Linux paths. Where ambiguity is likely, "Linux path" or "HDFS path" is stated explicitly; if not explicitly stated, the path is an HDFS path.
Here is a detailed explanation of each of the commands in the above table:
-ls Show current directory structure
This command option lists the directory structure of the specified path; it is followed by an HDFS path, such as:
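A sketch of the command together with illustrative output (the file names, sizes and timestamps are made up purely to show the format):
hadoop fs -ls /
# drwxr-xr-x   - root supergroup          0 2014-01-20 10:30 /usr
# -rw-r--r--   1 root supergroup       1024 2014-01-20 10:31 /liao.txt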
The path here is the HDFS root directory, and the output format is very similar to that of the Linux ls -l command. The meaning of each column is as follows:
1. The first character indicates the file type (d means a directory, - means a regular file).
2. The next 9 characters are permission bits, in three groups of three (the first group is the owner's permissions, the second group is the group's permissions, and the third group is everyone else's permissions; for example, drwxr-xr-x in the first row means it is a directory whose owner has rwx permissions, whose group has r-x permissions, and whose other users have r-x permissions).
3. The number or "-" after the permission bits indicates the number of replicas: a file shows the number of replicas as a digit, while a directory shows "-".
4. The following "root" or "liao" indicates the owner of the file.
5. The "supergroup" after the owner indicates the owning group.
6. The 0 after the owning group is the size of the file, in bytes.
7. After the file size comes the modification time, in year-month-day hour:minute format.
8. The last column is the path of the file.
If no path follows hadoop fs -ls, the /user/<current user> directory is accessed. Since we are logged in as the root user, this accesses the /user/root directory of HDFS, such as:
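A sketch (this assumes you are logged in as root, so the command lists /user/root):
hadoop fs -ls          # equivalent to: hadoop fs -ls /user/root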
Note: if the /user/root directory does not exist, an error is reported saying that the path does not exist.
-lsr Recursively display the directory structure
This command option recursively displays the directory structure of the specified path; it is followed by an HDFS path, for example the HDFS root directory /.
Note: the command lists the contents of the HDFS root directory recursively.
-du Show the size of each file under a directory
This command option displays the size of each file under the specified path, in bytes, for example:
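A sketch, using the /usr directory from the earlier examples:
hadoop fs -du /usr     # prints the size of each file and subdirectory under /usr, in bytes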
-dus Show the total size of a directory
This command option displays the total size of the specified path, in bytes, for example:
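A sketch on the same assumed directory:
hadoop fs -dus /usr    # prints a single line with the total size of /usr, in bytes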
-count Count the number of files and folders
This command option displays the number of folders, the number of files, and the total file size under the specified path, for example:
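A sketch on the HDFS root directory:
hadoop fs -count /     # output format: <folder count> <file count> <total size> <path>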
Note: in the output, the first number is the number of folders, the second is the number of files, and the third is the total size of the files, in bytes.
-mv Move
This command option moves an HDFS file into the specified directory. It is followed by two paths: the first is the source file and the second is the destination directory, for example:
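A sketch of three commands (liao.txt and /usr are taken from the earlier examples):
hadoop fs -ls /                 # list the root directory before the move
hadoop fs -mv /liao.txt /usr    # move /liao.txt into the /usr directory
hadoop fs -ls /usr              # list the destination after the move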
Note: the three commands above show the changes before and after the move.
-cp Copy
This command option copies the specified HDFS file into the specified HDFS directory. It is followed by two paths: the first is the file to copy and the second is the destination, for example:
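A sketch of three commands (the destination file name liao2.txt is purely illustrative):
hadoop fs -ls /                          # list the root directory before the copy
hadoop fs -cp /usr/liao.txt /liao2.txt   # copy the file to the root directory under a new name
hadoop fs -ls /                          # list the root directory after the copy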
Note: the three commands above show the changes before and after the copy.
-rm Delete files/empty directories
This command option deletes the specified file or an empty directory, for example:
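A sketch (the file name is taken from the earlier examples):
hadoop fs -rm /liao2.txt       # delete a single file
hadoop fs -rm /user/root       # attempt to delete a non-empty directory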
Note: the last command, hadoop fs -rm /user/root, tries to delete the directory, but the directory has content, so it cannot be deleted; if the directory were empty, it could be deleted.
-rmr Recursive deletion
This command option recursively deletes the specified directory along with all subdirectories and files under it, for example:
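A sketch that matches the note below:
hadoop fs -rmr /user    # recursively delete the /user directory and everything under it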
Note: the recursive deletion above removes the /user directory and everything under it from the HDFS root directory.
-put Upload files
This command option copies files from Linux to HDFS, for example:
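A sketch, assuming a local Linux file /root/liao.txt (the local path is illustrative):
hadoop fs -ls /                    # list the HDFS root directory before the upload
hadoop fs -put /root/liao.txt /    # copy the local file into the HDFS root directory
hadoop fs -ls /                    # list the HDFS root directory after the upload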
Note: the commands above show the state before and after the file upload.
-copyFromLocal Copy from local
This operation behaves the same as -put, for example:
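A sketch, using the same assumed local file:
hadoop fs -copyFromLocal /root/liao.txt /    # same effect as -put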
-moveFromLocal Move from local
This command moves a file from Linux to HDFS (the local copy is removed after the transfer), for example:
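A sketch, using the same assumed local file:
hadoop fs -moveFromLocal /root/liao.txt /    # like -put, but the local file is deleted afterwards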
-getmerge Merge to local
This command option merges the contents of all files in the specified HDFS directory into a single local Linux file, for example:
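A sketch, assuming /usr contains several text files (the local output path is illustrative):
hadoop fs -getmerge /usr /root/merged.txt    # concatenate every file under /usr into one local file
cat /root/merged.txt                         # inspect the merged result on Linux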
-cat View the contents of a file
This command option views the contents of a file, for example:
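A sketch on the liao.txt file used throughout this article:
hadoop fs -cat /liao.txt    # print the contents of the file to the terminal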
-text View the contents of a file
This command can be thought of as the same as -cat, for example:
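A sketch of the same operation with -text:
hadoop fs -text /liao.txt    # for a plain text file the output is identical to -cat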
-mkdir Create a directory
This command option creates a directory; it is followed by the HDFS path of the directory to be created, for example:
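A sketch (the directory name /liao is illustrative):
hadoop fs -mkdir /liao    # create the directory /liao in HDFS
hadoop fs -ls /           # verify that it now appears under the root directory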
-setrep Set the number of replicas
This command option modifies the replication factor of a saved file; it is followed by the number of replicas and then the file path, for example:
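A sketch that matches the note below:
hadoop fs -setrep 3 /liao.txt    # change the replication factor of /liao.txt to 3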
Note: here the replication factor of the /liao.txt file is changed from 1 to 3, which means two more replicas are needed; HDFS automatically copies the file to generate the new replicas.
If the final path is a directory, the -R option is required in order to modify the replication factor of all files in the directory, for example:
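A sketch of the recursive form (the /usr directory is taken from the earlier examples):
hadoop fs -setrep -R 3 /usr    # recursively change the replication factor of every file under /usr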
Note: when the replication factor is set on a directory, it takes effect for all files under that directory.
-touchz Create a zero-length file
This command option creates a zero-length (empty) file in HDFS, for example:
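A sketch (the file name is illustrative):
hadoop fs -touchz /empty.txt    # create a zero-length file in HDFS
hadoop fs -ls /                 # the new file shows up with a size of 0 bytes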
-stat Display statistics for a file
This command option displays some statistics about a file, for example:
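A sketch using the format string explained in the note below:
hadoop fs -stat '%b %n %o %r %Y' /liao.txt    # prints: <size> <name> <block size> <replication> <modification time>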
Note: this command option accepts a format string in single quotation marks. The format in the example above, '%b %n %o %r %Y', prints, in order, the file size, file name, block size, number of replicas, and modification time.
-tail View the tail of a file
This command option displays the last 1 KB of a file. It is typically used to view logs. With the -f option, new content is displayed automatically as the file changes, for example:
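A sketch of both forms:
hadoop fs -tail /liao.txt       # show the last 1 KB of the file
hadoop fs -tail -f /liao.txt    # keep printing new content as the file grows (useful for logs)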
-chmod Modify file permissions
This command option modifies the permissions of a file, using the same syntax as chmod in the Linux shell, for example:
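A sketch (the mode value 777 is illustrative):
hadoop fs -chmod 777 /liao.txt    # give the owner, group and others full permissions on the file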
Adding the -R option modifies the permissions of all files in a folder, for example:
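A sketch of the recursive form (the mode value is illustrative):
hadoop fs -chmod -R 755 /usr    # recursively change the permissions of everything under /usr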
-chown Modify the owner
This command option modifies the owner of a file, for example:
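A sketch that matches the note below:
hadoop fs -chown lavimer /liao.txt    # change the owner of the file from root to lavimer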
Note: the command above changes the owner of the file from root to lavimer. In addition, with the -R option, the owner and group of all files in a folder can be modified recursively.
-chgrp Modify the owning group
The purpose of this command is to modify the group to which a file belongs, for example:
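A sketch (the group name supergroup appears in the -ls output described earlier):
hadoop fs -chgrp supergroup /liao.txt    # change only the owning group of the file
hadoop fs -chgrp -R supergroup /usr      # recursively change the group of everything under /usr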
-help Help
This command option displays help information; it is followed by the command option to look up, for example:
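A sketch that matches the note below:
hadoop fs -help rm    # show the usage of the rm command option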
Note: the command above queries the usage of rm.
The output of the -help command is not always completely accurate; for example, the help shown for count is incorrect, but the usage of every command option is displayed, for example:
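A sketch (count is the entry whose help text is inaccurate):
hadoop fs -help count    # displays usage for count, though the text is not entirely accurate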
Hopefully this will be corrected in a future version!