Spark Shuffle Internals Explained


In the MapReduce framework, shuffle is the bridge between the map and reduce phases: the output of the map must pass through the shuffle before it reaches the reduce, so the performance and throughput of the shuffle directly affect the performance of the whole program. As an implementation of the MapReduce model, Spark naturally implements its own shuffle logic.

Shuffle

Shuffle is a specific phase in the MapReduce framework, sitting between the map phase and the reduce phase. When the output of the map is to be consumed by the reduce, that output must be partitioned by key (by hashing, in the default case) and distributed to the individual reducers; this process is the shuffle. Because the shuffle involves disk reads and writes as well as network transfers, its performance directly affects the running efficiency of the entire program.
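To make the hash distribution concrete, here is a minimal, self-contained Scala sketch of how a key can be mapped to a reducer by hashing. It mirrors the behavior of Spark's default HashPartitioner, but the reducer count and sample records are illustrative values, not anything from the article:

    // Minimal sketch of assigning a key to a reducer by hashing,
    // mirroring the behavior of Spark's default HashPartitioner.
    object HashPartitionSketch {
      // Non-negative modulo: Java's % can return a negative value
      // when the hash code is negative.
      def nonNegativeMod(x: Int, mod: Int): Int = {
        val raw = x % mod
        if (raw < 0) raw + mod else raw
      }

      def getPartition(key: Any, numReducers: Int): Int =
        nonNegativeMod(key.hashCode, numReducers)

      def main(args: Array[String]): Unit = {
        val numReducers = 3 // illustrative value
        val records = Seq("apple" -> 1, "banana" -> 2, "cherry" -> 3, "apple" -> 4)
        // Records with the same key always land in the same partition,
        // which is what lets the reduce side aggregate per key.
        records.foreach { case (k, v) =>
          println(s"($k, $v) -> reducer ${getPartition(k, numReducers)}")
        }
      }
    }

Because the mapping depends only on the key, every occurrence of a given key is routed to the same reducer, regardless of which mapper produced it.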

In the overall MapReduce process, the shuffle phase sits between the map phase and the reduce phase.

Conceptually, shuffle is the bridge that moves data between the two phases. How is this partitioning actually implemented? Let's take Spark as an example and look at its shuffle implementation.

Spark Shuffle Evolution

Let's briefly walk through the entire shuffle process in Spark:

  • First, each mapper creates one bucket per reducer, giving M × R buckets in total, where M is the number of map tasks and R is the number of reduce tasks.
  • Second, each mapper's results are written into the buckets according to the configured partition algorithm. The partition algorithm can be customized; by default, the key is hashed to pick a bucket.
  • When a reducer starts, it fetches the buckets belonging to it from remote or local block managers, based on its own task ID and the IDs of the mappers it depends on, and takes them as its input.

Here the bucket is an abstract concept: in the implementation, each bucket may correspond to a file, to part of a file, or to something else.
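The M × R bucket layout described above can be modeled in a few lines. The following toy Scala sketch (the sample data and names are made up for illustration) keeps one in-memory bucket per (mapper, reducer) pair and shows each reducer gathering its bucket from every mapper; the real implementation goes through files and the block manager, which is omitted here:

    // Toy, in-memory model of the M x R bucket layout:
    // each of M mappers keeps one bucket per reducer, and reducer r's
    // input is the r-th bucket from every mapper.
    object BucketLayoutSketch {
      type Record = (String, Int)

      def main(args: Array[String]): Unit = {
        val numMappers = 2  // M
        val numReducers = 3 // R

        // Output of each mapper before partitioning (made-up data).
        val mapOutputs: Seq[Seq[Record]] = Seq(
          Seq("a" -> 1, "b" -> 2, "c" -> 3),
          Seq("a" -> 4, "d" -> 5)
        )

        def partition(key: String): Int =
          math.floorMod(key.hashCode, numReducers)

        // buckets(m)(r): records produced by mapper m for reducer r.
        val buckets: Seq[Seq[Seq[Record]]] = mapOutputs.map { out =>
          (0 until numReducers).map(r => out.filter { case (k, _) => partition(k) == r })
        }

        // The "fetch" step: each reducer pulls its bucket from every mapper.
        for (r <- 0 until numReducers) {
          val input = (0 until numMappers).flatMap(m => buckets(m)(r))
          println(s"reducer $r input: ${input.mkString(", ")}")
        }
      }
    }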

Apache Spark's shuffle process is similar to Apache Hadoop's, and some concepts carry over directly. The side that supplies the data is called the map side, and each task that produces data there is called a Mapper; correspondingly, the side that receives the data is called the reduce side, and each task that pulls data there is called a Reducer. The shuffle process is, in essence, partitioning the data produced on the map side and sending each partition to its corresponding Reducer.
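As a usage example, the short Spark program below exercises such a shuffle end to end: reduceByKey introduces a shuffle boundary, and the HashPartitioner decides which Reducer receives each key. The application name, master URL, partition count, and sample words are arbitrary choices for this sketch:

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

    object ShuffleExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("shuffle-example").setMaster("local[2]")
        val sc = new SparkContext(conf)

        val words = sc.parallelize(Seq("spark", "shuffle", "spark", "map", "reduce"))

        // Map side: emit (word, 1). The shuffle hashes each word to one of
        // three partitions; the reduce side then sums the counts per word.
        val counts = words
          .map(w => (w, 1))
          .reduceByKey(new HashPartitioner(3), _ + _)

        counts.collect().foreach(println)
        sc.stop()
      }
    }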

Reference:

http://jerryshao.me/architecture/2014/01/04/spark-shuffle-detail-investigation/

https://ihainan.gitbooks.io/spark-source-code/content/section3/index.html
