Please credit the source when reprinting: http://blog.csdn.net/lastsweetop/article/details/9086695
The previous articles covered single-threaded operations. To copy many files in parallel, Hadoop provides a small tool called distcp. Its most common use is copying files between two Hadoop clusters; the built-in help documentation is quite detailed, so I will not repeat it here. Since my development environment has only one cluster, the same cluster is used for the demonstration:

hadoop distcp hdfs://namenode:9000/user/hadoop/input hdfs://namenode:9000/user/hadoop/input1

The complete list of options:
distcp [OPTIONS] <srcurl>* <desturl>

OPTIONS:
-p[rbugp]              Preserve status
                         r: replication number
                         b: block size
                         u: user
                         g: group
                         p: permission
                         -p alone is equivalent to -prbugp
-i                     Ignore failures
-log <logdir>          Write logs to <logdir>
-m <num_maps>          Maximum number of simultaneous copies
-overwrite             Overwrite destination
-update                Overwrite if src size different from dst size
-skipcrccheck          Do not use CRC check to determine if src is
                       different from dest. Relevant only if -update
                       is specified
-f <urilist_uri>       Use list at <urilist_uri> as src list
-filelimit <n>         Limit the total number of files to be <= n
-sizelimit <n>         Limit the total size to be <= n bytes
-delete                Delete the files existing in the dst but not in src
-mapredSslConf <f>     Filename of SSL configuration for mapper task
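As a concrete illustration, several of these options are often combined in one invocation. The paths below are hypothetical placeholders; the command is built into a variable and echoed so the sketch can be shown without a live cluster — to actually run the copy, execute the command itself.

```shell
# Hypothetical paths, just to illustrate combining options:
#   -m 20     limit the job to 20 concurrent map tasks
#   -update   copy only files whose size differs at the destination
#   -log      write per-file copy logs to an HDFS directory
cmd="hadoop distcp -m 20 -update -log hdfs://namenode:9000/user/hadoop/distcp-logs \
hdfs://namenode:9000/user/hadoop/input hdfs://namenode:9000/user/hadoop/input1"
echo "$cmd"
```

Note that -update and -overwrite change how source paths map onto the destination, so it is worth testing such invocations on a small directory first.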
Looking at the output of a distcp run, you will find that distcp is implemented as a MapReduce job — but one with only map tasks and no reducers.
13/06/18 10:59:19 INFO tools.DistCp: srcPaths=[hftp://namenode:50070/user/hadoop/input]
13/06/18 10:59:19 INFO tools.DistCp: destPath=hdfs://namenode:9000/user/hadoop/input1
13/06/18 10:59:20 INFO tools.DistCp: hdfs://namenode:9000/user/hadoop/input1 does not exist.
13/06/18 10:59:20 INFO tools.DistCp: sourcePathsCount=3
13/06/18 10:59:20 INFO tools.DistCp: filesToCopyCount=2
13/06/18 10:59:20 INFO tools.DistCp: bytesToCopyCount=1.7m
13/06/18 10:59:20 INFO mapred.JobClient: Running job: job_201306131134_0009
13/06/18 10:59:21 INFO mapred.JobClient:  map 0% reduce 0%
13/06/18 10:59:35 INFO mapred.JobClient:  map 100% reduce 0%
Distcp distributes the files to be copied evenly across its map tasks, and each file is handled by a single map (a file is never split between maps). How many maps are used by default? Distcp tries to give each map roughly 256 MB of data; if the total size is below 256 MB, only one map is allocated. The number of maps is also capped at 20 per cluster node. You can set the number manually with -m. For HDFS balancing, it is often better to allocate more maps, so that each map copies less data and the blocks spread more evenly across the cluster.

If the two clusters run different Hadoop versions, copying over hdfs:// may fail because the RPC systems are incompatible. In that case you can read from the source over the HTTP-based hftp protocol, but the destination must still use hdfs://, for example:

hadoop distcp hftp://namenode:50070/user/hadoop/input hdfs://namenode:9000/user/hadoop/input1
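The default map-count rule described above can be sketched as a small calculation. This is my own simplification for illustration, not the actual DistCp source: one map per 256 MB of input, capped at 20 maps per cluster node, and never fewer than one map.

```shell
# Sketch of the default distcp map-count heuristic (a simplification,
# not the real DistCp code): ceil(total_bytes / 256 MB), clamped to
# [1, 20 * num_nodes].
bytes_per_map=$((256 * 1024 * 1024))
maps_per_node=20

estimate_maps() {
  total_bytes=$1
  num_nodes=$2
  # round up: number of 256 MB chunks in the input
  by_size=$(( (total_bytes + bytes_per_map - 1) / bytes_per_map ))
  cap=$(( maps_per_node * num_nodes ))
  maps=$by_size
  if [ "$maps" -gt "$cap" ]; then maps=$cap; fi
  if [ "$maps" -lt 1 ]; then maps=1; fi
  echo "$maps"
}

small=$(estimate_maps $((1 * 1024 * 1024)) 5)           # 1 MB on 5 nodes
large=$(estimate_maps $((10 * 1024 * 1024 * 1024)) 1)   # 10 GB on 1 node
echo "small job: $small map(s), large job: $large map(s)"
```

Under this rule a tiny job gets a single map no matter how many nodes exist, while a 10 GB job on one node would want 40 maps by size but is capped at 20 — which is exactly why -m is useful when you want a different balance.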
A recommended alternative to hftp is the webhdfs protocol: both the source and the destination addresses can use webhdfs, giving full compatibility between versions.
hadoop distcp webhdfs://namenode:50070/user/hadoop/input webhdfs://namenode:50070/user/hadoop/input1
Thanks to Tom White — most of this article comes from his definitive guide. Since the Chinese translation is poor, I worked from the original English edition and added my own understanding of the official documentation. These are essentially reading notes.