Notes based on http://spark-internals.books.yourtion.com/markdown/4-shuffleDetails.html
1. Does shuffle read process records as they are fetched, or fetch everything first and process afterwards?
It fetches and processes at the same time.
MapReduce's shuffle stage also applies combine() while fetching, but combine() only handles partial data. Because the records entering reduce() must be sorted, MapReduce has to wait until all the data has been shuffled and sorted before reduce() can start.
Spark, by contrast, does not require the shuffled data to be globally ordered, so it does not need to wait for the entire shuffle to finish before processing.
Spark uses a data structure that supports aggregation, such as a HashMap. Each \<Key, Value\> record fetched during shuffle (deserialized from the buffered FileSegment) is put directly into the HashMap. If the HashMap already contains the Key, the new value is aggregated immediately: func(hashMap.get(Key), Value). In the WordCount example above, func is hashMap.get(Key) + Value, and the result of func is put back into the HashMap under the same Key.
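The aggregation described above can be sketched as follows. This is an illustrative Python sketch, not Spark's actual implementation: `shuffle_read_aggregate` and `func` are hypothetical names standing in for the per-record merge that Spark's aggregating HashMap performs as records arrive.

```python
# Sketch (NOT Spark's real code): online aggregation during shuffle read.
# Each fetched <Key, Value> record is merged into a HashMap immediately,
# instead of waiting for a global sort the way MapReduce does.

def shuffle_read_aggregate(records, func):
    """Aggregate (key, value) records as they arrive, one at a time."""
    hash_map = {}
    for key, value in records:  # records stream in during the fetch
        if key in hash_map:
            # Key already present: aggregate in place with func
            hash_map[key] = func(hash_map[key], value)
        else:
            # First occurrence of this key
            hash_map[key] = value
    return hash_map

# WordCount example from the text: func is addition on counts.
fetched = [("a", 1), ("b", 1), ("a", 1), ("a", 1), ("b", 1)]
counts = shuffle_read_aggregate(fetched, lambda acc, v: acc + v)
print(counts)  # {'a': 3, 'b': 2}
```

Because each record is folded in as soon as it is deserialized, memory holds only one aggregated value per key rather than all raw records, and no sort pass is needed before processing begins.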
2. The difference between shuffle in Spark and in MapReduce