Chapter 2 MapReduce Introduction
An ideal split size is usually the size of one HDFS block. Hadoop performs best when the node executing a map task is the same node that stores the task's input data (the data locality optimization, which avoids transferring data over the network).
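As a small illustration of the split-size point above, the number of map tasks equals the number of input splits; with the default split size equal to the HDFS block size, the count can be sketched as follows (the 128 MB block size and the 1 GB file are assumed example values, not taken from this text):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # assumed HDFS block size: 128 MB

def num_splits(file_size, split_size=BLOCK_SIZE):
    """One map task per split; the last split may be smaller than a block."""
    return max(1, -(-file_size // split_size))  # ceiling division

# An assumed 1 GB input file with 128 MB blocks yields 8 splits,
# hence 8 map tasks, each ideally scheduled on the node holding its block.
print(num_splits(1024 * 1024 * 1024))  # -> 8
```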
MapReduce process summary: a row of data is read from the input file and processed by the map function, which returns key-value pairs; the system then sorts the map output. If there are multiple reducers, each map task partitions its output, creating one partition per reduce task. If a combiner is specified, it runs after the map phase and its output is passed to the reducers; a combiner can reduce the amount of data transferred between map and reduce. Each reducer first shuffles (fetches and merges) the data it receives, then runs the reduce function and returns the result. (For details, refer to the description and diagram in section 2.4; see the code in section 2.3.2.)
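The flow above (map, combiner, partitioning, shuffle/sort, reduce) can be sketched as a pure-Python simulation of a word count. This is not Hadoop's actual API; all function names and the two-reducer setup are illustrative assumptions:

```python
from collections import defaultdict

def map_fn(line):
    """Map: emit a (word, 1) pair for each word in one input line."""
    return [(word, 1) for word in line.split()]

def combine(pairs):
    """Combiner: pre-sum counts on the map side to cut shuffle traffic."""
    sums = defaultdict(int)
    for key, value in pairs:
        sums[key] += value
    return list(sums.items())

def partition(key, num_reducers):
    """Assign each key to one reduce task (hash partitioning)."""
    return hash(key) % num_reducers

def reduce_fn(key, values):
    """Reduce: sum all counts received for one key."""
    return key, sum(values)

def run_job(lines, num_reducers=2):
    # Map phase: run map, then the combiner, then partition the output.
    partitions = [defaultdict(list) for _ in range(num_reducers)]
    for line in lines:
        for key, value in combine(map_fn(line)):
            partitions[partition(key, num_reducers)][key].append(value)
    # Shuffle/sort + reduce phase: each reducer processes its own
    # partition's keys in sorted order.
    results = {}
    for part in partitions:
        for key in sorted(part):
            k, v = reduce_fn(key, part[key])
            results[k] = v
    return results

print(run_job(["a b a", "b c"]))
```

Running the job on the two lines "a b a" and "b c" yields the counts a=2, b=2, c=1. The combiner here runs on each map call's output, which models why it shrinks the data moved between the map and reduce phases.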
For details, see Hadoop Study Summary 3: Map-Reduce.
Chapter 3 Hadoop Distributed File System
For details, refer to Hadoop Study Summary 1: HDFS Introduction (a repost; well written).