Hadoop shell operations

Source: Internet
Author: User
Tags: hadoop, fs

hadoop fs
List all commands.
hadoop fs -help ls
List detailed information about a command.
hadoop dfs -mkdir /data/weblogs
hadoop dfs -mkdir /data/dir1 /data/dir2 ...
Create a directory; parent directories are created recursively, and multiple directories can be created in one command.
echo "Hello World" > weblog_entries.txt
Create a new local file (the file is created if it does not already exist).
hadoop fs -copyFromLocal weblog_entries.txt /data/weblogs
hadoop fs -put weblog_entries.txt /data/weblogs
Copy data from the local file system to HDFS. Unlike copyFromLocal, put can also copy an entire folder (since version 1.0).
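A minimal sketch of copying a whole directory with put (weblogs_dir is a hypothetical local directory, not from this article):
hadoop fs -put weblogs_dir /data/weblogs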
hadoop dfs -ls /data/weblogs
List the files in a directory.
hadoop dfs -cat /data/weblogs/*
hadoop dfs -cat /data/weblogs/* | head -1
hadoop dfs -cat /data/weblogs/* | tail -1
View file content; pipe through head or tail to see only the first or last line.
hadoop dfs -copyToLocal /data/weblogs/* ./
hadoop dfs -get /data/weblogs/* ./
hadoop dfs -getmerge /data/weblogs/* <merged file name> (merges multiple files into a single file and downloads it to the local machine; the name of the merged file must be specified)
hadoop dfs -get -ignoreCrc ... skip CRC data verification when copying to the local disk; generally used only to salvage corrupted data.
hadoop dfs -get -crc ... copy the data together with its CRC checksum file.
Copy files from HDFS to the local file system.
Before copying, make sure the local file system has enough space, and account for network transfer speed:
moving around 10 TB of data over a 1 Gbit/s link takes about 23 hours.
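As a rough check of that figure: 10 TB is about 8 x 10^13 bits, and at 1 Gbit/s the transfer takes about 8 x 10^4 seconds, i.e. roughly 22-23 hours, before any protocol overhead.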
get cannot copy an entire folder; put can do so since version 1.0.
-----
Principle:
fs stands for file system; each command corresponds to a method of the FileSystem class.
The default file system is set by the fs.default.name property in core-site.xml, for example hdfs://hostname:9000. Since HDFS is therefore the default file system, hadoop fs and hadoop dfs behave identically.
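A minimal core-site.xml sketch of that property (hostname and port 9000 are placeholders):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hostname:9000</value>
  </property>
</configuration>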

The number of MapReduce output files is determined by the value of mapred.reduce.tasks, which can be set per job with job.setNumReduceTasks(int num). It is a client-side parameter, not a cluster parameter, so different jobs can use different numbers of reducers.
The default value is 1.
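A sketch of overriding it per job from the command line, for jobs that parse generic options; the jar name and input/output paths are placeholders:
hadoop jar hadoop-examples.jar wordcount -D mapred.reduce.tasks=4 /data/in /data/out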
Two recommended values:
0.95 * <number of nodes in the cluster> * mapred.tasktracker.reduce.tasks.maximum (the maximum number of reduce slots configured per tasktracker)
or
1.75 * <number of nodes in the cluster> * mapred.tasktracker.reduce.tasks.maximum
Rationale:
0.95: all reduce tasks can launch immediately once the maps finish, so the job's reduce phase completes in a single wave.
1.75: the faster nodes finish their first wave of reduces and start a second wave, so two waves of reduces complete the job with better overall load balancing.
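A worked example, assuming a hypothetical 10-node cluster with mapred.tasktracker.reduce.tasks.maximum = 2:
0.95 * 10 * 2 = 19 reduce tasks (one wave)
1.75 * 10 * 2 = 35 reduce tasks (two waves, with faster nodes picking up extra tasks)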
 
 
-- Use Pig to demonstrate getmerge
First, create a local file named test with the following tab-separated content:
1 hello
2 world
3 url
4 test
5 haha
Upload it to the /data/weblogs directory of HDFS.
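A minimal upload sketch, assuming the file test sits in the current local directory:
hadoop fs -put test /data/weblogs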

Contents of the test.pig script:
weblogs = load '/data/weblogs/*' as
(
md5: chararray,
url: chararray
); -- fields are tab-delimited by default
md5_grp = group weblogs by md5 parallel 4; -- group by md5 and set the reduce count to 4
store md5_grp into '/data/weblogs/md5_group.bcp'; -- the output directory will contain four reduce output files
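The script can then be run with (assuming Pig is installed and pointed at the cluster):
pig test.pig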

-----
Use the getmerge command to merge the four reduce output files and download them to the local machine:
hadoop dfs -getmerge /data/weblogs/md5_group.bcp <local file name> (the local file name must be specified)
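For example, using md5_group.bcp as an illustrative local file name:
hadoop dfs -getmerge /data/weblogs/md5_group.bcp md5_group.bcp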

 
