Abstract: Setting the number of maps by the size of the input split
Preface: When implementing a Hadoop program, we often need to adjust the number of map tasks to suit different situations. Besides setting the maximum number of map tasks that can run on each node, we also need to control how many tasks actually perform the map operation.
1. How to control the number of map tasks actually running
We know that when a file is uploaded to the HDFS file system, it is cut into blocks (64 MB each by default). However, the amount of data each map task processes is not always the physical block size; the actual input size handled by each map is determined by the InputSplit. So how is the InputSplit size calculated?
splitSize = Math.max(minSize, Math.min(maxSize, blockSize))

where:
    minSize = mapred.min.split.size
    maxSize = mapred.max.split.size
We control how many map tasks actually run by changing the number of splits produced by the InputFormat, and to control the number of splits we need to control the size of each InputSplit.
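To make this concrete, here is a minimal sketch (not from the original article) that plugs the default values into the formula above; the 200 MB input file is a hypothetical example:

// A minimal sketch (assumed values) showing how the default settings determine
// the split size and hence, approximately, the number of map tasks.
public class SplitSizeExample {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // default HDFS block size: 64 MB
        long minSize   = 1L;                  // mapred.min.split.size default
        long maxSize   = Long.MAX_VALUE;      // mapred.max.split.size default

        long splitSize = Math.max(minSize, Math.min(maxSize, blockSize));

        long fileSize  = 200L * 1024 * 1024;  // hypothetical 200 MB input file
        // Ceiling division; the real FileInputFormat allows a little slack on the last split.
        long numMaps   = (fileSize + splitSize - 1) / splitSize;

        System.out.println("split size = " + splitSize + " bytes"); // 67108864 (64 MB)
        System.out.println("map tasks  = " + numMaps);              // 4
    }
}

With the defaults, the split size equals the block size, so a 200 MB file yields four splits and therefore four map tasks.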
2. How to control the size of each InputSplit
Hadoop's default input format is TextInputFormat, which defines how files are read and how they are split. Let's open its source file (in the org.apache.hadoop.mapreduce.lib.input package):
package org.apache.hadoop.mapreduce.lib.input;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class TextInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                             TaskAttemptContext context) {
    return new LineRecordReader();
  }

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    CompressionCodec codec =
        new CompressionCodecFactory(context.getConfiguration()).getCodec(file);
    if (null == codec) {
      return true;
    }
    return codec instanceof SplittableCompressionCodec;
  }
}
From the source code we can see that TextInputFormat inherits from FileInputFormat, and TextInputFormat itself contains no code for splitting files, so it must be using FileInputFormat's default InputSplit logic. Opening the source of FileInputFormat, we find the following:
public static void setMinInputSplitSize(Job job, long size) {
  job.getConfiguration().setLong("mapred.min.split.size", size);
}

public static long getMinSplitSize(JobContext job) {
  return job.getConfiguration().getLong("mapred.min.split.size", 1L);
}

public static void setMaxInputSplitSize(Job job, long size) {
  job.getConfiguration().setLong("mapred.max.split.size", size);
}

public static long getMaxSplitSize(JobContext context) {
  return context.getConfiguration().getLong("mapred.max.split.size", Long.MAX_VALUE);
}
As we can see, this is where Hadoop reads mapred.min.split.size and mapred.max.split.size, with default values of 1 and Long.MAX_VALUE respectively. Therefore, we can control the InputSplit size simply by reassigning these two values in our program.
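Since those setters simply write the two properties into the job configuration, the same effect can be achieved by setting the keys directly (a sketch, not from the original article; `job` is assumed to be an org.apache.hadoop.mapreduce.Job already created in the driver):

// Equivalent to calling the setMin/MaxInputSplitSize helpers above:
// write the old-style mapred.* keys straight into the job configuration.
job.getConfiguration().setLong("mapred.min.split.size", 1024L);              // lower bound: 1 KB
job.getConfiguration().setLong("mapred.max.split.size", 10L * 1024 * 1024);  // upper bound: 10 MB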
3. If we want to set the split size to 10 MB
Then we can add the following code to the driver section of the MapReduce program:
TextInputFormat.setMinInputSplitSize(job, 1024L);              // set the minimum split size
TextInputFormat.setMaxInputSplitSize(job, 1024 * 1024 * 10L);  // set the maximum split size (10 MB)
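For context, here is a minimal driver sketch (not from the original article) showing where these two calls would sit; the class name, input/output paths, and the commented-out mapper/reducer settings are placeholders for whatever the actual job uses:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Hypothetical driver class; only the two split-size calls come from the article.
public class SplitSizeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "split-size-demo");
        job.setJarByClass(SplitSizeDriver.class);

        // Use TextInputFormat and cap the split size at 10 MB, so the number
        // of map tasks is driven by the 10 MB splits rather than the block size.
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.setMinInputSplitSize(job, 1024L);
        TextInputFormat.setMaxInputSplitSize(job, 1024 * 1024 * 10L);

        // A real job would also configure its mapper/reducer here, e.g.:
        // job.setMapperClass(MyMapper.class);
        // job.setReducerClass(MyReducer.class);

        TextInputFormat.addInputPath(job, new Path(args[0]));
        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}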