What Is a Combiner Function?
“Many MapReduce jobs are limited by the bandwidth available on the cluster, so it pays to minimize the data transferred between map and reduce tasks. Hadoop allows the user to specify a combiner function to be run on the map output—the combiner function’s output forms the input to the reduce function. Since the combiner function is an optimization, Hadoop does not provide a guarantee of how many times it will call it for a particular map output record, if at all. In other words, calling the combiner function zero, one, or many times should produce the same output from the reducer.” -- Hadoop: The Definitive Guide
To put it simply, a combiner is a function that runs after the mapper and is very similar to the reducer; Hadoop in Action therefore calls it a "local reduce". It cuts down the data sent over the network and improves performance, but because it is only an optimization, Hadoop does not guarantee that it will run.
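The classic example is word count, where the reduce logic is a commutative, associative sum, so the very same logic can serve as both combiner and reducer (with the Java API this is the usual job.setCombinerClass(MyReducer.class) trick, where MyReducer is a placeholder). A minimal streaming-style sketch in Perl, assuming map output lines of the form word<TAB>count; the script name is illustrative:

    #!/usr/bin/perl
    # sum_by_key.pl -- a word-count style "local reduce": sums the
    # counts for each key. Because summing is commutative and
    # associative, the same script works as combiner and as reducer.
    use strict;
    use warnings;

    my %count;
    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $word, $n ) = split /\t/, $line;
        next unless defined $n;    # skip malformed lines
        $count{$word} += $n;
    }
    print "$_\t$count{$_}\n" for sort keys %count;

Running this zero, one, or many times over any subset of the map output leaves the final counts unchanged, which is exactly the contract quoted above.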
There is actually a deeper design question here. The underlying assumption is a "fat mapper, slim reducer" style: complex logic and heavy computation go into the mapper wherever possible, while the reducer does only simple aggregation. This is why the combiner sits on the mapper side rather than on the reducer side.
A Combiner for the Reducer
Now imagine a project that requires multiple chained mappers and reducers, and whose business logic must live on the reducer side. For example, each input log record contains three fields: user ID, country, and timestamp. For the users of each country we need to compute the access time, defined as the last access timestamp minus the first access timestamp. The log arrives at roughly 10 GB per hour, but the analysis covers a full day.
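The mapper for this job is almost trivial. A sketch, assuming each raw log line already carries the three fields tab-separated (the field layout is an assumption):

    #!/usr/bin/perl
    # Pass-through mapper: emit user_id as the key so that Hadoop
    # partitions and sorts the records by user.
    use strict;
    use warnings;

    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $user_id, $country, $timestamp ) = split /\t/, $line;
        next unless defined $timestamp;    # skip malformed lines
        print "$user_id\t$country\t$timestamp\n";
    }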
The same user can appear in any hour of the day, so all of a user's records must be brought together before the access time can be computed. Clearly this cannot be done on the mapper side. Instead, records with the same user ID are partitioned and sorted on that key so that they arrive grouped at the reducer, which can use the following standard template (in Perl):
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Shared state: the fields of the current line and the grouping key.
    # onBeginKey/onSameKey/onEndKey are the per-job handlers (see below).
    my ( $user_id, $country, $timestamp, $key, $cur_key );

    while ( my $line = <STDIN> ) {
        chomp($line);
        ( $user_id, $country, $timestamp ) = split( /\t/, $line );

        # Set the grouping key; for this job we group on the user ID.
        $key = $user_id;

        if ($cur_key) {
            if ( $key ne $cur_key ) {
                onEndKey();      # finish the previous user
                onBeginKey();    # start the new one
            }
            onSameKey();
        }
        else {
            onBeginKey();
            onSameKey();
        }
    }
    if ($cur_key) {
        onEndKey();              # flush the last key group
    }
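The three handlers carry the job-specific logic. For the access-time job they might look like the sketch below (assumed to live in the same file as the template, so the file-scoped variables are shared; the names $min_ts, $max_ts, and $cur_country are illustrative, and timestamps are assumed to be numeric):

    # Per-user state: first/last timestamp seen and the user's country.
    my ( $min_ts, $max_ts, $cur_country );

    sub onBeginKey {
        $cur_key     = $key;        # remember which user we are grouping
        $cur_country = $country;
        ( $min_ts, $max_ts ) = ( undef, undef );
    }

    sub onSameKey {
        $min_ts = $timestamp if !defined($min_ts) || $timestamp < $min_ts;
        $max_ts = $timestamp if !defined($max_ts) || $timestamp > $max_ts;
    }

    sub onEndKey {
        # Access time = last access timestamp - first access timestamp.
        print "$cur_country\t", $max_ts - $min_ts, "\n";
        $cur_key = undef;
    }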
We then need a second mapper/reducer pair to gather the access times by country. If the first job's reducer could itself have a combiner, it would greatly reduce the network transfer into the second job and could even help avoid out-of-memory problems (that deserves a separate article). Since stock Hadoop offers no such facility, you have to roll your own. The simplest implementation is a global hash that does the aggregation inside the reducer; after all, a combiner or a reducer is essentially a hash. A sketch follows.
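Building on the handlers above, one way to realize the global-hash idea (it assumes the second job only needs, per country, the sum of access times and the number of users; everything here is illustrative) is to replace onEndKey so that it folds each user into the hash, and to flush the hash once after the main loop:

    # Global per-country aggregates, flushed once per reducer process.
    my ( %sum, %users );

    sub onEndKey {
        $sum{$cur_country} += $max_ts - $min_ts;   # partial sum of access times
        $users{$cur_country}++;                    # users folded into it
        $cur_key = undef;
    }

    # After the while-loop (and the final onEndKey) has run:
    for my $c ( sort keys %sum ) {
        print "$c\t$sum{$c}\t$users{$c}\n";        # one line per country
    }

Each reducer now emits one line per country instead of one line per user, so the second job only has to add up a handful of partial sums, which is exactly the work a combiner would have done.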
A complete implementation is omitted; the above is only a design idea. You are welcome to discuss it.