I have recently been studying Hadoop, using Hadoop 2.6.0.
While learning to write a MapReduce program, I found that by default the input file is split into records line by line. The following analyzes how to change this record-splitting behavior:
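To make concrete what "splitting the input into records on a custom delimiter" means, here is a plain-Java sketch (the class and method names are my own for illustration, not Hadoop's) of the scanning a record reader performs:

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterSplitDemo {
    // Mimics, in plain Java, what a record reader does with a custom
    // delimiter: scan the input and cut a record at each occurrence.
    static List<String> splitRecords(String input, String delimiter) {
        List<String> records = new ArrayList<>();
        int start = 0;
        int pos;
        while ((pos = input.indexOf(delimiter, start)) != -1) {
            records.add(input.substring(start, pos));
            start = pos + delimiter.length();
        }
        if (start < input.length()) {
            records.add(input.substring(start)); // trailing record
        }
        return records;
    }

    public static void main(String[] args) {
        // With the default behavior the delimiter would be "\n";
        // here we cut on "END" instead.
        System.out.println(splitRecords("oneENDtwoENDthree", "END"));
        // prints [one, two, three]
    }
}
```

Each call to the mapper would then receive one such record as its value, regardless of how many newlines it contains.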
Let's first look at how the default is implemented:
If you do not call the job's setInputFormatClass(), the default InputFormat is the TextInputFormat class.
TextInputFormat extends FileInputFormat,
and FileInputFormat in turn extends the abstract InputFormat class (in the new org.apache.hadoop.mapreduce API; in the old org.apache.hadoop.mapred API, InputFormat is an interface).
Looking at TextInputFormat, you can see that its createRecordReader() method constructs a LineRecordReader, and we note that among the values it reads is a delimiter parameter used to specify the record delimiter (you can see how it is used in LineRecordReader's implementation of record splitting). So we can define a class MyInputFormat that extends FileInputFormat and then replace
String delimiter = context.getConfiguration().get(
        "textinputformat.record.delimiter");
with: String delimiter = "END";
where "END" is the desired record delimiter.
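A minimal sketch of such a MyInputFormat, assuming the new org.apache.hadoop.mapreduce API (the real TextInputFormat also overrides isSplitable() to handle compressed files, which is omitted here):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Identical to TextInputFormat except that the record delimiter
// is hard-coded to "END" instead of being read from the
// "textinputformat.record.delimiter" configuration property.
public class MyInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        String delimiter = "END";
        byte[] recordDelimiterBytes = delimiter.getBytes();
        return new LineRecordReader(recordDelimiterBytes);
    }
}
```

LineRecordReader in Hadoop 2.6.0 accepts the delimiter as a byte array in its constructor and falls back to newline splitting when it is null.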
Then, in the driver program, set the job's input format class to MyInputFormat.class.
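In the driver that could look like the following (the class name MyDriver and the use of args for input/output paths are placeholders of mine):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "custom record delimiter");
        job.setJarByClass(MyDriver.class);

        // The key line: use MyInputFormat instead of the default
        // TextInputFormat, so records are cut on "END", not newlines.
        job.setInputFormatClass(MyInputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that because the stock TextInputFormat already reads "textinputformat.record.delimiter" from the configuration, an alternative that avoids the custom class entirely is to call conf.set("textinputformat.record.delimiter", "END") before creating the job.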