When people first meet MapReduce, they sometimes assume a single reducer is enough. After all, by the time the data reaches the reduce phase it has already been neatly grouped by key, and who doesn't like classified data? But a single reducer throws away the benefit of parallel computing: with only one reducer, our cloud computing degrades into a light rain.
When there are multiple reducers, we need a mechanism to control how mapper results are allocated among them. That is the job of the partitioner.
By default, Hadoop partitions keys by their hash values, using HashPartitioner. Sometimes this default does not meet our needs, for example with the Edge class we customized earlier (http://blog.csdn.net/on_way_/article/details/8589187). Suppose we want to count the number of passengers departing from each airport, and we have the following data:
(Beijing, Shanghai) James
(Beijing, Qingdao) Li Si
...
If we use HashPartitioner, these two records hash differently, because the whole Edge key is hashed, so they are sent to different reducers. The departure count for Beijing is then computed in two places, and each partial count is wrong.
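For intuition, the default HashPartitioner of the old mapred API computes the partition essentially like this (a simplified sketch of org.apache.hadoop.mapred.lib.HashPartitioner, with the class name changed to mark it as a sketch):

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class HashPartitionerSketch<K, V> implements Partitioner<K, V> {

    @Override
    public void configure(JobConf job) {
        // no configuration needed
    }

    @Override
    public int getPartition(K key, V value, int numPartitions) {
        // The whole key is hashed, so two Edge keys with the same departure
        // node but different arrival nodes can land on different reducers.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}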
We need to write a custom partitioner for our application.
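For reference, here is a minimal sketch of the Edge key type this article relies on. The fields and the getDepartureNode() accessor follow the earlier post linked above, but treat the exact names as assumptions and see that post for the full class.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class Edge implements WritableComparable<Edge> {

    private String departureNode; // e.g. "Beijing"
    private String arrivalNode;   // e.g. "Shanghai"

    public String getDepartureNode() {
        return departureNode;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        departureNode = in.readUTF();
        arrivalNode = in.readUTF();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(departureNode);
        out.writeUTF(arrivalNode);
    }

    @Override
    public int compareTo(Edge o) {
        // order by departure node first, then arrival node
        int cmp = departureNode.compareTo(o.departureNode);
        return cmp == 0 ? arrivalNode.compareTo(o.arrivalNode) : cmp;
    }
}

With that in place, the custom partitioner looks like this: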
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class EdgePartitioner implements Partitioner<Edge, Writable> {

    @Override
    public void configure(JobConf job) {
        // no configuration needed
    }

    @Override
    public int getPartition(Edge key, Writable value, int numPartitions) {
        // Partition on the departure node only, so every record for the same
        // airport reaches the same reducer. Mask the sign bit, because
        // hashCode() can be negative, which would yield an illegal partition.
        return (key.getDepartureNode().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
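To put the partitioner to work, register it on the JobConf in the driver. A sketch follows; the driver class name FlightCountDriver and the reducer count are placeholders, and the mapper, reducer, and paths are omitted.

import org.apache.hadoop.mapred.JobConf;

public class FlightCountDriver {
    public static void main(String[] args) {
        JobConf conf = new JobConf(FlightCountDriver.class);
        conf.setPartitionerClass(EdgePartitioner.class);
        // Multiple reducers; with a single reducer the partitioner is moot.
        conf.setNumReduceTasks(4);
        // ... set the mapper, reducer, and input/output paths,
        // then submit with JobClient.runJob(conf).
    }
}

Partitioning on the departure node guarantees that all flights out of the same airport reach the same reducer, which is what makes the per-airport count correct.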
The following figure illustrates the role of the partitioner in the MapReduce data flow.
Between the map and reduce phases, a MapReduce program routes mapper output among the reducers. This step is called shuffling, because the output of a single mapper may be distributed to many nodes in the cluster.