Details on how to control the number of Hive maps

What determines the number of Hive maps, i.e. the number of MapReduce map tasks? The input split size. How the input split size is calculated is therefore the key to adjusting the number of maps.
Hadoop provides the InputFormat interface to describe the format of the input data. One of its key methods is getSplits, which partitions the input data.
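
As a rough, illustrative calculation of how split size drives the map count (the sizes below are made-up example values, not defaults):

// Illustrative arithmetic only: one map task per input split.
long totalInput = 10L * 1024 * 1024 * 1024;                  // 10 GB of input (example value)
long splitSize  = 256L * 1024 * 1024;                        // 256 MB per split (example value)
long numMaps    = (totalInput + splitSize - 1) / splitSize;  // = 40 map tasks
// Halving the split size to 128 MB would double the number of maps to 80.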
Hive wraps InputFormat with its own implementations:

Which implementation is used is determined by the hive.input.format parameter; the two main implementations are HiveInputFormat and CombineHiveInputFormat.
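
For illustration, assuming a plain Hadoop Configuration object is at hand, the implementation can be selected through the hive.input.format key (in practice this is usually done with a set statement in the Hive session or in hive-site.xml); the class names are the actual Hive classes:

import org.apache.hadoop.conf.Configuration;

// Sketch: choosing the split strategy via hive.input.format.
Configuration conf = new Configuration();
// Combine small files into fewer splits (fewer maps):
conf.set("hive.input.format", "org.apache.hadoop.hive.ql.io.CombineHiveInputFormat");
// Or compute splits per partition/file format without combining (typically more maps):
// conf.set("hive.input.format", "org.apache.hadoop.hive.ql.io.HiveInputFormat");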
For HiveInputFormat:


public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
  // Scan each partition
  for (Path dir : dirs) {
    PartitionDesc part = getPartitionDescFromPath(pathToPartitionInfo, dir);
    // Obtain the input format of the partition
    Class inputFormatClass = part.getInputFileFormatClass();
    InputFormat inputFormat = getInputFormatFromCache(inputFormatClass, job);
    // Compute the splits with the split algorithm of that format.
    // Note: the InputFormat here is the old-API org.apache.hadoop.mapred one, not
    // org.apache.hadoop.mapreduce; a new-API format fails at query time with
    // "Input format must implement InputFormat". The distinction matters because the
    // new API computes the split size as Math.max(minSize, Math.min(maxSize, blockSize)),
    // while the old API uses Math.max(minSize, Math.min(goalSize, blockSize)).
    InputSplit[] iss = inputFormat.getSplits(newjob, numSplits / dirs.length);
    for (InputSplit is : iss) {
      // Wrap each split and collect it
      result.add(new HiveInputSplit(is, inputFormatClass.getName()));
    }
  }
  return result.toArray(new HiveInputSplit[result.size()]);
}
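
As the comment above notes, the two API generations compute the split size differently. The following sketch uses example numbers (not defaults) to show how the same input can yield different split sizes, and therefore a different number of maps, under the two formulas:

// Example numbers only; real values come from mapred.min.split.size, the configured
// maximum split size, and the HDFS block size.
long totalSize = 1024L * 1024 * 1024;       // 1 GB of input
long blockSize = 128L * 1024 * 1024;        // 128 MB block
long minSize   = 1L;
long maxSize   = 64L * 1024 * 1024;         // 64 MB maximum split size (new API)
int  numSplits = 4;                         // hint passed to getSplits (old API)

// Old API (org.apache.hadoop.mapred): goalSize is derived from the numSplits hint
long goalSize     = totalSize / numSplits;                             // 256 MB
long oldSplitSize = Math.max(minSize, Math.min(goalSize, blockSize));  // 128 MB -> 8 maps

// New API (org.apache.hadoop.mapreduce): maxSize is taken from configuration
long newSplitSize = Math.max(minSize, Math.min(maxSize, blockSize));   // 64 MB -> 16 maps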

 

The split computation in CombineHiveInputFormat is more involved:


public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
  // Load CombineFileInputFormatShim, which extends org.apache.hadoop.mapred.lib.CombineFileInputFormat
  CombineFileInputFormatShim combine = ShimLoader.getHadoopShims()
      .getCombineFileInputFormat();
  if (combine == null) {
    // If no shim is available, fall back to HiveInputFormat
    return super.getSplits(job, numSplits);
  }
  Path[] paths = combine.getInputPathsShim(job);
  for (Path path : paths) {
    // Non-native (external storage handler) tables are split with HiveInputFormat
    if ((tableDesc != null) && tableDesc.isNonNative()) {
      return super.getSplits(job, numSplits);
    }
    Class inputFormatClass = part.getInputFileFormatClass();
    String inputFormatClassName = inputFormatClass.getName();
    InputFormat inputFormat = getInputFormatFromCache(inputFormatClass, job);
    if (this.mrwork != null && !this.mrwork.getHadoopSupportsSplittable()) {
      if (inputFormat instanceof TextInputFormat) {
        if (new CompressionCodecFactory(job).getCodec(path) != null) {
          // If hive.hadoop.supports.splittable.combineinputformat (MAPREDUCE-1597) is not
          // enabled, compressed TextInputFormat input falls back to the HiveInputFormat
          // split algorithm.
          return super.getSplits(job, numSplits);
        }
      }
    }
    // Same fallback as above
    if (inputFormat instanceof SymlinkTextInputFormat) {
      return super.getSplits(job, numSplits);
    }
    CombineFilter f = null;
    boolean done = false;
    Path filterPath = path;
    // Controlled by hive.mapper.cannot.span.multiple.partitions, default false. If it is
    // true, one pool is created per partition (that branch is omitted in this excerpt);
    // with the default false, splits of the same table and file format share one pool so
    // they can be combined across partitions.
    if (!mrwork.isMapperCannotSpanPartns()) {
      opList = HiveFileFormatUtils.doGetWorksFromPath(
          pathToAliases, aliasToWork, filterPath);
      f = poolMap.get(new CombinePathInputFormat(opList, inputFormatClassName));
    }
    if (!done) {
      if (f == null) {
        f = new CombineFilter(filterPath);
        combine.createPool(job, f);
      } else {
        f.addPath(filterPath);
      }
    }
  }
  if (!mrwork.isMapperCannotSpanPartns()) {
    // The combine split algorithm is invoked only here; the shim extends the newer
    // org.apache.hadoop.mapred.lib.CombineFileInputFormat implementation.
    iss = Arrays.asList(combine.getSplits(job, 1));
  }
  // Special handling for sampled queries
  if (mrwork.getNameToSplitSample() != null && !mrwork.getNameToSplitSample().isEmpty()) {
    iss = sampleSplits(iss);
  }
  // Wrap and return the results
  for (InputSplitShim is : iss) {
    CombineHiveInputSplit csplit = new CombineHiveInputSplit(job, is);
    result.add(csplit);
  }
  return result.toArray(new CombineHiveInputSplit[result.size()]);
}
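
The size of the combined splits that combine.getSplits produces is bounded by the usual CombineFileInputFormat limits. The sketch below shows the settings commonly used for this; the exact keys honoured can vary with the Hadoop and Hive versions, so treat the names as typical rather than definitive:

import org.apache.hadoop.mapred.JobConf;

// Sketch: the limits CombineFileInputFormat consults when merging files into splits.
JobConf job = new JobConf();
job.set("mapred.max.split.size", "268435456");           // 256 MB upper bound per combined split
job.set("mapred.min.split.size.per.node", "134217728");  // 128 MB before spilling to rack level
job.set("mapred.min.split.size.per.rack", "134217728");  // 128 MB before spilling to the global pool
// Larger values combine more files per split and reduce the number of maps;
// smaller values produce more, smaller splits and therefore more maps.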
