The InputFormat class is responsible for processing the input data for the map phase. At the start of a job, InputFormat divides all the data in the HDFS input files into logical InputSplit objects.
Here, a split is a logical division of the data: it may cover part of an HDFS block, a whole block, or span several blocks. Each split is handed to one map task, so the number of map tasks is determined by the number of splits.
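For example, a 256 MB input file divided into 128 MB splits produces two splits, and the job therefore runs two map tasks.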
How is the number of InputSplits determined? The configuration parameters that affect the split count are listed below:
numSplits: taken from job.getNumMapTasks(), i.e., the value set by org.apache.hadoop.mapred.JobConf.setNumMapTasks(int n), which serves as a hint to the MapReduce framework about the desired number of map tasks.
minSplitSize: defaults to 1 and can be overridden by a subclass via protected void setMinSplitSize(long minSplitSize). It is generally left at 1 except in special cases.
blockSize: the HDFS block size. The default is 64 MB, though clusters are commonly configured to use 128 MB.
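For concreteness, here is a minimal sketch of how the first two parameters can be set through the old mapred API before a job is submitted (the input path and the chosen values are illustrative, not recommendations):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class SplitTuning {
    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(SplitTuning.class);

        // Hypothetical input directory; replace with a real HDFS path.
        FileInputFormat.setInputPaths(job, new Path("/data/input"));

        // Hint to the framework about the desired number of map tasks;
        // this feeds numSplits and therefore goalSize below.
        job.setNumMapTasks(10);

        // Raise the lower bound on the split size (in bytes); default is 1.
        job.setLong("mapred.min.split.size", 64 * 1024 * 1024L);
    }
}

The block size, by contrast, is a property of the files in HDFS (dfs.block.size in Hadoop 1.x) rather than a per-job knob.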
long goalSize = totalSize / (numSplits == 0 ? 1 : numSplits);
long minSize = Math.max(job.getLong("mapred.min.split.size", 1), minSplitSize);

// Generate the splits (these declarations are restored from the surrounding Hadoop source).
ArrayList<FileSplit> splits = new ArrayList<FileSplit>(numSplits);
NetworkTopology clusterMap = new NetworkTopology();
for (FileStatus file : files) {
  Path path = file.getPath();
  FileSystem fs = path.getFileSystem(job);
  long length = file.getLen();
  BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0, length);
  if ((length != 0) && isSplitable(fs, path)) {
    long blockSize = file.getBlockSize();
    long splitSize = computeSplitSize(goalSize, minSize, blockSize);

    // Carve off full-sized splits while more than SPLIT_SLOP (1.1) splits'
    // worth of data remains, recording the hosts that hold each split.
    long bytesRemaining = length;
    while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
      String[] splitHosts = getSplitHosts(blkLocations, length - bytesRemaining, splitSize, clusterMap);
      splits.add(new FileSplit(path, length - bytesRemaining, splitSize, splitHosts));
      bytesRemaining -= splitSize;
    }

    // The remainder (at most 1.1 x splitSize) becomes the final split.
    if (bytesRemaining != 0) {
      splits.add(new FileSplit(path, length - bytesRemaining, bytesRemaining,
                               blkLocations[blkLocations.length - 1].getHosts()));
    }
  } else if (length != 0) {
    // Unsplittable file: one split covering the whole file.
    String[] splitHosts = getSplitHosts(blkLocations, 0, length, clusterMap);
    splits.add(new FileSplit(path, 0, length, splitHosts));
  } else {
    // Create empty hosts array for zero length files.
    splits.add(new FileSplit(path, 0, length, new String[0]));
  }
}
return splits.toArray(new FileSplit[splits.size()]);

// The split size is the goal size clamped between minSize and the block size.
protected long computeSplitSize(long goalSize, long minSize, long blockSize) {
  return Math.max(minSize, Math.min(goalSize, blockSize));
}
The above is the Hadoop source code, from FileInputFormat.getSplits(), that determines the number of splits.
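To see how the formula plays out, the following self-contained sketch mirrors computeSplitSize() and the SPLIT_SLOP loop from the source above (SPLIT_SLOP is 1.1 in Hadoop; the 200 MB file size and the map-count hint of 2 are made-up inputs for illustration):

public class SplitMath {
    private static final double SPLIT_SLOP = 1.1; // same 10% slack as FileInputFormat

    // Mirror of computeSplitSize() from the source above.
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        long totalSize = 200 * mb; // one 200 MB input file (illustrative)
        int numSplits = 2;         // hint from setNumMapTasks(2)
        long minSplitSize = 1;
        long blockSize = 64 * mb;  // default HDFS block size

        long goalSize = totalSize / numSplits;                                // 100 MB
        long splitSize = computeSplitSize(goalSize, minSplitSize, blockSize); // clamped to 64 MB

        // Replay the carving loop to count the resulting splits.
        int count = 0;
        long bytesRemaining = totalSize;
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
            bytesRemaining -= splitSize;
            count++;
        }
        if (bytesRemaining != 0) {
            count++; // the trailing remainder becomes the last split
        }
        System.out.println("splitSize = " + splitSize / mb + " MB, splits = " + count);
        // Prints: splitSize = 64 MB, splits = 4
    }
}

Because goalSize (100 MB) is larger than the block size (64 MB), the split size is clamped to 64 MB, so the 200 MB file yields three full splits plus an 8 MB remainder: four map tasks in total, despite the hint of two.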