Map Quantity Control in MapReduce

Source: Internet
Author: User
The InputFormat class is used to process the map input data. At the start of a job, InputFormat divides all the data in the HDFS input files into logical InputSplit objects.

Here, a split is a logical division of the HDFS data: part of a block, a whole block, or several blocks. Each split is assigned to one map task, so the number of maps is determined by the number of splits.

How is the number of InputSplits determined? The configuration parameters related to the number of splits are listed below:

numSplits: taken from JobConf.getNumMapTasks(); that is, org.apache.hadoop.mapred.JobConf.setNumMapTasks(int n) sets this value as a hint to the M-R framework about the desired number of maps.

minSplitSize: The default value is 1. A subclass can reset it by overriding protected void setMinSplitSize(long minSplitSize). It is generally left at 1, except in special cases.

blockSize: The block size of HDFS. The default value is 64 MB, though HDFS clusters are commonly configured with 128 MB.
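Putting these three parameters together, the split size comes out to max(minSize, min(goalSize, blockSize)), where goalSize = totalSize / numSplits. A small self-contained sketch of this arithmetic (the input size and settings below are illustrative, not from the original article):

```java
public class SplitSizeDemo {
    // Mirrors FileInputFormat.computeSplitSize(): max(minSize, min(goalSize, blockSize))
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long totalSize = 1024L * 1024 * 1024;   // a 1 GB input file (illustrative)
        int numSplits = 2;                      // hint, e.g. from JobConf.setNumMapTasks(2)
        long goalSize = totalSize / numSplits;  // 512 MB
        long minSize = 1;                       // default mapred.min.split.size
        long blockSize = 128L * 1024 * 1024;    // 128 MB HDFS block

        long splitSize = computeSplitSize(goalSize, minSize, blockSize);
        long numMaps = (totalSize + splitSize - 1) / splitSize; // ceiling division

        // blockSize wins here: min(512 MB, 128 MB) = 128 MB, so the 1 GB file
        // yields 8 splits and hence 8 map tasks.
        System.out.println(splitSize + " bytes per split, " + numMaps + " maps");
    }
}
```

Note that the numSplits hint only matters when goalSize falls below blockSize; to get fewer maps than blocks, one raises mapred.min.split.size instead.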

 

long goalSize = totalSize / (numSplits == 0 ? 1 : numSplits);
long minSize = Math.max(job.getLong("mapred.min.split.size", 1), minSplitSize);

for (FileStatus file : files) {
  Path path = file.getPath();
  FileSystem fs = path.getFileSystem(job);
  // These two declarations, present in the Hadoop source, were missing
  // from the excerpt and are restored for readability:
  long length = file.getLen();
  BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0, length);

  if ((length != 0) && isSplitable(fs, path)) {
    long blockSize = file.getBlockSize();
    long splitSize = computeSplitSize(goalSize, minSize, blockSize);

    long bytesRemaining = length;
    while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
      String[] splitHosts = getSplitHosts(blkLocations, length - bytesRemaining,
                                          splitSize, clusterMap);
      splits.add(new FileSplit(path, length - bytesRemaining, splitSize, splitHosts));
      bytesRemaining -= splitSize;
    }
    if (bytesRemaining != 0) {
      splits.add(new FileSplit(path, length - bytesRemaining, bytesRemaining,
                               blkLocations[blkLocations.length - 1].getHosts()));
    }
  } else if (length != 0) {
    String[] splitHosts = getSplitHosts(blkLocations, 0, length, clusterMap);
    splits.add(new FileSplit(path, 0, length, splitHosts));
  } else {
    // Create empty hosts array for zero length files
    splits.add(new FileSplit(path, 0, length, new String[0]));
  }
}
return splits.toArray(new FileSplit[splits.size()]);

protected long computeSplitSize(long goalSize, long minSize, long blockSize) {
  return Math.max(minSize, Math.min(goalSize, blockSize));
}


 

This is the Hadoop source code, excerpted from org.apache.hadoop.mapred.FileInputFormat.getSplits(), that determines the number of splits.
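In practice, the map count is steered from the job driver through the parameters described above. A minimal sketch using the old mapred API (the class name, input path handling, and the specific values are illustrative assumptions, not from the original article):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MapCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MapCountDriver.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));

        // Only a hint: the actual map count still comes from the split
        // computation shown above (numSplits feeds into goalSize).
        conf.setNumMapTasks(10);

        // Raising the minimum split size forces larger splits and therefore
        // fewer maps; 256 MB here is an illustrative value.
        conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);

        JobClient.runJob(conf);
    }
}
```

Because splitSize = max(minSize, min(goalSize, blockSize)), setNumMapTasks() can only push the map count up (by shrinking goalSize below blockSize), while mapred.min.split.size can only push it down.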

 
