How to control the number of maps in MapReduce under the Hadoop framework

The core source code that controls the number of maps

    long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
    // getFormatMinSplitSize() returns 1 by default; getMinSplitSize(job) returns the
    // user-configured minimum split size, so a user value greater than 1 takes effect

    long maxSize = getMaxSplitSize(job);
    // getMaxSplitSize(job) returns the user-configured maximum split size,
    // Long.MAX_VALUE (9223372036854775807L) by default

    long splitSize = computeSplitSize(blockSize, minSize, maxSize);

    protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

From the code above, we can see that:

maxSize defaults to Long.MAX_VALUE (the largest value a long can hold)

blockSize defaults to 128 MB since Hadoop 2.0

minSize defaults to 1

So the default split size (splitSize) is 128 MB, equal to the block size.

One split corresponds to one map task, so by default one block corresponds to one map task.

To control the number of maps, therefore, start with minSize and maxSize.

To increase the number of maps, set maxSize to a value smaller than blockSize; to reduce the number of maps, set minSize to a value larger than blockSize.
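
To see how the formula plays out in both directions, here is a minimal standalone sketch (plain Java; the constants and class name are illustrative, not from the Hadoop source):

    public class SplitSizeDemo {
        // Same logic as computeSplitSize() shown above
        static long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        public static void main(String[] args) {
            long blockSize = 128L * 1024 * 1024; // 128 MB, the default block size

            // Defaults: minSize = 1, maxSize = Long.MAX_VALUE -> 134217728 (= blockSize)
            System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));

            // More maps: maxSize (32 MB) < blockSize -> splitSize = 33554432 (= maxSize)
            System.out.println(computeSplitSize(blockSize, 1L, 32L * 1024 * 1024));

            // Fewer maps: minSize (256 MB) > blockSize -> splitSize = 268435456 (= minSize)
            System.out.println(computeSplitSize(blockSize, 256L * 1024 * 1024, Long.MAX_VALUE));
        }
    }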

Concretely, add the following settings to the job configuration:

    FileInputFormat.setMinInputSplitSize(job, 301349250); // set minSize
    FileInputFormat.setMaxInputSplitSize(job, 10000);     // set maxSize
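
For context, these two calls belong in an ordinary job driver. Below is a minimal sketch of such a driver, assuming a map-only job that uses Hadoop's default identity Mapper; the class name, job name, and the input/output paths taken from args are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SplitControlDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "split-control-demo");
            job.setJarByClass(SplitControlDriver.class);

            job.setMapperClass(Mapper.class); // identity mapper; enough to observe the map count
            job.setNumReduceTasks(0);         // map-only job

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist

            // The two knobs discussed above
            FileInputFormat.setMinInputSplitSize(job, 301349250L); // minSize
            FileInputFormat.setMaxInputSplitSize(job, 10000L);     // maxSize

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

After the job runs, the number of map tasks can be read from the job counters or the web UI.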

In the experiment:

Test file size: 297 MB (311349250 bytes)

Block size: 128 MB

Test code:

    FileInputFormat.setMinInputSplitSize(job, 301349250);
    FileInputFormat.setMaxInputSplitSize(job, 10000);

After running the test, the number of maps is 1. By the split-size formula above, the split size works out to 301349250, which is smaller than the file size of 311349250, so in theory there should be two maps. Why isn't there? Look at the source:

    while (((double) bytesRemaining) / splitSize > 1.1D) {
        int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
        splits.add(makeSplit(path, length - bytesRemaining, splitSize,
                blkLocations[blkIndex].getHosts()));
        bytesRemaining -= splitSize;
    }

As you can see, as long as the remaining bytes are no more than 1.1 times the split size, they are kept in a single split. This avoids starting a second map that would process only a small amount of data and waste resources.
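
Plugging the experiment's numbers into that loop shows why only one map ran. This is a standalone simulation of the loop, not the real Hadoop source:

    public class SplitSlopDemo {
        public static void main(String[] args) {
            long length = 311349250L;    // test file size from the experiment
            long splitSize = 301349250L; // split size computed by the formula
            double SPLIT_SLOP = 1.1D;    // the slack factor used by FileInputFormat

            long bytesRemaining = length;
            int splits = 0;
            while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
                splits++;
                bytesRemaining -= splitSize;
            }
            if (bytesRemaining != 0) {
                splits++; // the remainder (here the whole file) becomes the last split
            }
            // 311349250 / 301349250 ~ 1.033 <= 1.1, so the loop body never runs
            // and the file ends up as a single split: prints "splits = 1"
            System.out.println("splits = " + splits);
        }
    }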

To summarize, the split process works roughly as follows: first the target files are traversed, and files that do not meet the input criteria are filtered out; the qualifying files are added to a list; each file is then sliced into splits according to the split size computed by the formula above (the tail of a file may be merged into the last split, a pattern familiar to anyone who has written network programs); the resulting splits are added to the split list; and finally each map task reads its own split for processing.
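
Putting the whole pass together, here is a small self-contained simulation of the traversal-and-slicing process just described (plain Java, no Hadoop dependency; the file lengths are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    public class GetSplitsSimulation {
        static long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        public static void main(String[] args) {
            long blockSize = 128L * 1024 * 1024;          // default block size
            long minSize = 1L, maxSize = Long.MAX_VALUE;  // default knob values
            long[] fileLengths = {311349250L, 1024L};     // already-filtered input files

            List<long[]> splits = new ArrayList<>();      // each entry: {offset, length}
            for (long length : fileLengths) {             // traverse the file list
                long splitSize = computeSplitSize(blockSize, minSize, maxSize);
                long bytesRemaining = length;
                while (((double) bytesRemaining) / splitSize > 1.1D) {
                    splits.add(new long[]{length - bytesRemaining, splitSize});
                    bytesRemaining -= splitSize;
                }
                if (bytesRemaining != 0) { // the tail (up to 1.1 x splitSize) is one split
                    splits.add(new long[]{length - bytesRemaining, bytesRemaining});
                }
            }
            // With the defaults this prints 4: three splits for the 297 MB file, one for the small file
            System.out.println(splits.size() + " splits, hence " + splits.size() + " map tasks");
        }
    }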
