In-depth Hadoop Research (4): distcp

Source: Internet
Author: User

If you reprint this article, please cite the source: http://blog.csdn.net/lastsweetop/article/details/9086695

The previous articles covered single-threaded operations. To copy many files in parallel, Hadoop provides a small tool: distcp. Its most common use is copying files between two Hadoop clusters; the help documentation is very detailed, so I will not repeat it here. Since my development environment does not have two clusters, the same cluster is used for the demonstration:

hadoop distcp hdfs://namenode:9000/user/hadoop/input hdfs://namenode:9000/user/hadoop/input1

The complete option list:
distcp [OPTIONS] <srcurl>* <desturl>

OPTIONS:
-p[rbugp]              Preserve status
                       r: replication number
                       b: block size
                       u: user
                       g: group
                       p: permission
                       -p alone is equivalent to -prbugp
-i                     Ignore failures
-log <logdir>          Write logs to <logdir>
-m <num_maps>          Maximum number of simultaneous copies
-overwrite             Overwrite destination
-update                Overwrite if src size different from dst size
-skipcrccheck          Do not use CRC check to determine if src is
                       different from dest. Relevant only if -update
                       is specified
-f <urilist_uri>       Use list at <urilist_uri> as src list
-filelimit <n>         Limit the total number of files to be <= n
-sizelimit <n>         Limit the total size to be <= n bytes
-delete                Delete the files existing in the dst but not in src
-mapredSslConf <f>     Filename of SSL configuration for mapper task

Looking at the output of a distcp run, you will find that distcp is a MapReduce job, but one with only map tasks and no reducer.
13/06/18 10:59:19 INFO tools.DistCp: srcPaths=[hftp://namenode:50070/user/hadoop/input]
13/06/18 10:59:19 INFO tools.DistCp: destPath=hdfs://namenode:9000/user/hadoop/input1
13/06/18 10:59:20 INFO tools.DistCp: hdfs://namenode:9000/user/hadoop/input1 does not exist.
13/06/18 10:59:20 INFO tools.DistCp: sourcePathsCount=3
13/06/18 10:59:20 INFO tools.DistCp: filesToCopyCount=2
13/06/18 10:59:20 INFO tools.DistCp: bytesToCopyCount=1.7m
13/06/18 10:59:20 INFO mapred.JobClient: Running job: job_201306131134_0009
13/06/18 10:59:21 INFO mapred.JobClient:  map 0% reduce 0%
13/06/18 10:59:35 INFO mapred.JobClient:  map 100% reduce 0%
distcp distributes the files evenly across the map tasks, and each file is copied by a single map. How many maps are used by default? distcp tries to give each map roughly 256 MB of data; if the total size to copy is less than 256 MB, distcp allocates only one map. The number of maps is also capped at 20 per cluster node.
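The sizing rule above can be sketched as a small calculation. This is an illustrative reconstruction based on the constants mentioned in the text (256 MB per map, at most 20 maps per node), not the actual Hadoop source code:

```python
# Illustrative sketch of the legacy distcp default map-count heuristic.
# The 256 MB and 20-maps-per-node figures come from the description above.
BYTES_PER_MAP = 256 * 1024 * 1024   # each map copies roughly 256 MB
MAX_MAPS_PER_NODE = 20              # cap of 20 maps per cluster node

def default_map_count(total_bytes, num_nodes):
    by_size = total_bytes // BYTES_PER_MAP       # one map per ~256 MB of data
    by_nodes = MAX_MAPS_PER_NODE * num_nodes     # never more than 20 maps/node
    return max(1, min(by_size, by_nodes))        # always at least one map

print(default_map_count(1 * 1024**3, 10))    # 1 GB on 10 nodes -> 4 maps
print(default_map_count(100 * 1024**2, 10))  # 100 MB, below 256 MB -> 1 map
```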

You can set the number of maps manually with -m. For HDFS balance, it is better to allocate more maps so that the copied blocks are spread across more nodes. If the two clusters run different Hadoop versions, copying over hdfs:// may fail because their RPC systems are incompatible. In that case you can read from the source over the HTTP-based hftp protocol, but the destination address must still use hdfs://, for example:

hadoop distcp hftp://namenode:50070/user/hadoop/input hdfs://namenode:9000/user/hadoop/input1
We recommend using webhdfs instead of hftp: both the source and the destination address can then use webhdfs, giving full compatibility.

hadoop distcp webhdfs://namenode:50070/user/hadoop/input webhdfs://namenode:50070/user/hadoop/input1

Thanks to Tom White: most of this article comes from his Definitive Guide. The Chinese translation is poor, however, so I worked from the original English edition and added some of my own understanding from the official documentation. These are essentially reading notes.
