MapReduce Programming Series, Part 10: using HashPartitioner to balance the computing load of Reducers.
Example4 demonstrated how to specify the number of reducers. This section describes how HashPartitioner groups Mapper output by key and hands each group to a Reducer for processing. A reasonable grouping policy keeps the computing load of the Reducers roughly equal, so overall reduce performance is more balanced.
Which reducer a key is sent to is determined by the return value of HashPartitioner's getPartition method:
public int getPartition(K2 key, V2 value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
The code above first reduces the key's hash code modulo 2^31 (the bitwise AND with Integer.MAX_VALUE keeps only the low 31 bits, clearing the sign bit so the value is non-negative), and then takes the remainder of dividing that value by the number of reducers. The result is the partition number for the key.
This works because Integer.MAX_VALUE is 2^31 - 1, and ANDing a number with 2^N - 1 is equivalent to taking its remainder modulo 2^N. Masking first matters: hashCode() can be negative, and a negative operand would make % return a negative partition number.
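As a quick sanity check (a standalone sketch written for this post, not part of example4 or example5), the following snippet shows that the mask turns even a negative hash code into a non-negative value, and therefore into a valid partition number:

public class MaskDemo {
    public static void main(String[] args) {
        int numReduceTasks = 4;
        // include negative values to mimic keys whose hashCode() is negative
        int[] hashes = { "cat".hashCode(), -123456789, Integer.MIN_VALUE + 7 };
        for (int h : hashes) {
            int masked = h & Integer.MAX_VALUE;       // low 31 bits: equals h mod 2^31
            int partition = masked % numReduceTasks;  // always in [0, numReduceTasks)
            System.out.printf("hash=%d masked=%d partition=%d%n", h, masked, partition);
        }
    }
}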
For details, refer to my earlier post:
http://blog.csdn.net/csfreebird/article/details/7355282
All keys that map to the same partition, together with their values, are sent to the corresponding reducer for processing.
The conclusion is as follows:
The Partitioner does not change the number of reducers; it determines which group each <key, value> pair belongs to, and therefore how much data each reducer processes.
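To make that concrete, here is a hypothetical custom Partitioner (the class name and the "hot key" policy are my own assumptions for illustration; example5 itself only uses HashPartitioner). It shows how the grouping policy alone can shift load between reducers:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class SkewAwarePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Hypothetical policy: reserve the last reducer for keys known to be heavy.
        if (numReduceTasks > 1 && key.toString().startsWith("hot")) {
            return numReduceTasks - 1;
        }
        // Everything else uses hash-based grouping over the remaining reducers.
        int tasks = (numReduceTasks > 1) ? numReduceTasks - 1 : numReduceTasks;
        return (key.hashCode() & Integer.MAX_VALUE) % tasks;
    }
}

Such a class would be registered with job.setPartitionerClass(SkewAwarePartitioner.class); the number of reducers stays the same, only the distribution of data among them changes.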
My example5 adopts HashPartitioner. Starting from example4, only one line of LogJob.java is modified:
job.setPartitionerClass(HashPartitioner.class);
In fact, if no Partitioner class is set, Hadoop uses HashPartitioner by default, so this line merely makes the default explicit.
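For context, here is roughly where that line sits in a job driver. Since the full LogJob.java is not shown in this post, the surrounding setup below is a hedged sketch and the class name, job name, and reducer count are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class LogJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "log job");     // hypothetical job name
        job.setNumReduceTasks(4);                       // reducer count, as set in example4
        job.setPartitionerClass(HashPartitioner.class); // the one-line change in example5
        // ... mapper, reducer, input and output settings omitted ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}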