Hive small file merge
When a Hive job's input consists of many small files and no file merging is applied, one map task is started for each small file.
If the files are so small that the startup and initialization time of each map task exceeds its actual processing time, resources are wasted and the job may even fail with an OutOfMemoryError.
Therefore, when a job starts, if you find that the input data volume is small but the number of map tasks is large, you should merge the small input files before the map phase.
Similarly, when writing data to a table, watch the number of reduce tasks and the size of the output files.
1. Merge Map input small files
# Maximum size of the input handled by each map task (here, 256 MB)
set mapred.max.split.size=256000000;
# Minimum split size on a single node
set mapred.min.split.size.per.node=100000000;
# Minimum split size within a single rack
set mapred.min.split.size.per.rack=100000000;
# Merge small files before the map phase
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
After org.apache.hadoop.hive.ql.io.CombineHiveInputFormat is enabled,
multiple small files on the same data node are merged into one split; the size of each merged split is capped by mapred.max.split.size.
mapred.min.split.size.per.node determines whether files spread across multiple data nodes are merged.
mapred.min.split.size.per.rack determines whether files spread across multiple racks are merged.
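Put together, the input-merge settings above are issued at the start of a Hive session before the query runs. A minimal sketch, where `src_logs` is a hypothetical table stored as many small files:

```sql
-- Merge small input files before the map phase (all sizes in bytes)
set mapred.max.split.size=256000000;          -- cap each combined split at ~256 MB
set mapred.min.split.size.per.node=100000000; -- below this, combine across nodes
set mapred.min.split.size.per.rack=100000000; -- below this, combine across racks
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;

-- src_logs is a hypothetical table; with the settings above,
-- one map task now reads several of its small files instead of one each.
select count(*) from src_logs;
```

Because these are session-level settings, they affect only queries run after them in the same session; cluster-wide defaults stay unchanged.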
2. Merge output files
# Merge small files at the end of a map-only task
set hive.merge.mapfiles=true;
# Merge small files at the end of a MapReduce task
set hive.merge.mapredfiles=true; (default value: false)
# Target size of the merged files
set hive.merge.size.per.task=256000000;
# When the average size of the output files is smaller than this value, start an independent map-reduce job to perform the file merge
set hive.merge.smallfiles.avgsize=16000000;
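A sketch of how the output-merge settings fit into a write job; `dst_table` and `src_logs` are hypothetical table names:

```sql
-- Merge small output files when the job finishes (all sizes in bytes)
set hive.merge.mapfiles=true;               -- merge after map-only jobs
set hive.merge.mapredfiles=true;            -- merge after MapReduce jobs
set hive.merge.size.per.task=256000000;     -- target size of each merged file
set hive.merge.smallfiles.avgsize=16000000; -- merge when avg output file < ~16 MB

-- If this INSERT produces many small files whose average size is under
-- 16 MB, Hive launches an extra merge job to compact them.
insert overwrite table dst_table
select key, count(*) from src_logs group by key;
```

The merge step costs one additional job, but it keeps the table's file count low, which in turn keeps later reads from spawning one map task per tiny file.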