Merging Small Files in a Hadoop Cluster

Source: Internet
Author: User

Storing large numbers of small files on a Hadoop cluster is not recommended. During MapReduce job scheduling, a map task's input does not cross file boundaries by default, so a file much smaller than a block (the block size on the cluster discussed here is 256 MB) still gets its own map task, and that map task processes nothing but the one small file. When a job reads many small files, most of the elapsed time is therefore spent scheduling and starting tasks rather than running the actual MapReduce logic, which makes execution very inefficient. For example, 1,000 files of 1 MB each schedule 1,000 map tasks, whereas the same 1 GB of data stored as four 256 MB blocks schedules only four.

Files stored on a Hadoop cluster should therefore be as large as possible. When a file is put onto the cluster it is cut into block-sized chunks; when a MapReduce job runs, each block is turned into an input split, and each split feeds one map task (a map task's input is at most one block). With large files, the total execution time of a MapReduce job is essentially just the running time of the map and reduce tasks, the scheduling overhead becomes negligible, and the cluster runs at its highest efficiency.
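Before merging, it helps to quantify the problem. A minimal check, assuming the Hadoop 1.x shell and a hypothetical directory /data/small_files, is to compare the file count with the total size; if the count is far larger than total bytes divided by the block size, the directory will schedule far more map tasks than necessary:

# prints: DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
hadoop dfs -count /data/small_files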


There are two main ways to combine small files:

1. Archive the small files into a har file

2. Use an MR program to merge the small files

For the first method, archiving into a har package works as follows (a complete example session follows these four steps):

1. Generate an archive file

hadoop archive -archiveName test.har -p /sourcepath /destpath

2. View the files after archiving

hadoop dfs -ls /destpath

3. View the files as they were before archiving (the har:// scheme exposes the original files inside the archive)

hadoop dfs -ls har:///destpath/test.har

4. The archive can also be used directly as the input of a subsequent MapReduce job, for example:

hadoop jar hadoop-examples-1.0.3.jar wordcount har:///tmp/aaa.har/* /tmp/wordcount2
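Putting the four steps together, an end-to-end session might look like the following sketch; the directory /user/logs, the archive name logs.har, and the output path /user/wordcount_out are hypothetical:

# step 1: archive /user/logs into /user/archives/logs.har
hadoop archive -archiveName logs.har -p /user logs /user/archives
# step 2: the archive appears as a single logs.har entry on HDFS
hadoop dfs -ls /user/archives
# step 3: the original files are still visible through the har:// scheme
hadoop dfs -ls har:///user/archives/logs.har/logs
# step 4: run a job directly against the archive
hadoop jar hadoop-examples-1.0.3.jar wordcount \
    har:///user/archives/logs.har/logs /user/wordcount_out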

For the second method, you write an MR job whose number of reduce tasks is determined by the total size of the input directory; the number of reduce tasks is also the number of merged output files. A worked example of the sizing rule follows, and the driver script after it puts the rule into practice.
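As a worked example of the sizing rule (the 5 GB figure is illustrative): with the 2 GB target size used in the script below, a 5 GB input directory yields two reduce tasks, i.e. two merged output files:

expr 5368709120 / 2147483648    # 5 GB / 2 GB = 2 (integer division)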


#!/bin/bash

block=2147483648; # 2 GB: target size of each merged output file

# check input parameter count
if [[ $# -ne 2 ]]
then
    echo "The usage of this script is:";
    echo $'\tsh merge.sh <input path> <temp output path>';
    exit 1;
fi

# check the input and output paths
if hadoop dfs -test -e $1
then
    if hadoop dfs -test -e $2
    then
        echo "The <temp output path> already exists, please choose another and try again!";
        exit 1;
    else
        # do the merge: one reduce task per $block bytes of input
        size=`hadoop dfs -dus $1 | awk -F '\t' '{print $2}'`;
        num=`expr $size / $block`;
        if [[ $num -eq 0 ]]
        then
            num=1;
        fi
        hadoop jar InputToOutput.jar com.InputToOutput.InputToOutput \
            -D input_dir=$1 \
            -D output_dir=$2 \
            -D reduce_num=$num
        if [[ $? -eq 0 ]]
        then
            # delete the source path
            hadoop dfs -rmr $1 > /dev/null
            # move the merged output back to the source path
            hadoop dfs -mv $2 $1;
            if [[ $? -eq 0 ]]
            then
                echo "Merge success!";
            fi
        fi
    fi
else
    echo "The <input path> does not exist, please check and try again!";
    exit 1;
fi

The jar package used by this script is provided in the attachment.
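If the attached jar is not available, a rough sketch of an equivalent merge job uses Hadoop Streaming with cat as an identity mapper and reducer. This is a substitute technique, not the attached program: it assumes the streaming jar shipped with Hadoop 1.0.3, and the shuffle re-sorts lines, so it is only suitable when line order does not matter:

# merge /data/input into $num files under /data/merged
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-1.0.3.jar \
    -D mapred.reduce.tasks=$num \
    -input /data/input \
    -output /data/merged \
    -mapper cat \
    -reducer cat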
