First, what is Shuffle?
Shuffle literally means reshuffling, as in shuffling a deck of cards. The key reason Shuffle is needed is that data sharing a common characteristic (typically the same key) must ultimately converge on the same compute node so it can be processed together.
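To make this concrete, here is a minimal, hypothetical Scala sketch using the classic RDD API (the job name and data are invented for illustration): reduceByKey can only produce its result once every record with the same key has been brought to the same node, and that movement is exactly a Shuffle.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: reduceByKey forces all records that share the same key
// (the "common characteristic") onto the partition that owns that key.
object ShuffleIntro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ShuffleIntro").setMaster("local[*]"))
    val words = sc.parallelize(Seq("spark", "shuffle", "spark", "hash", "shuffle", "spark"))
    val counts = words.map(w => (w, 1))   // map side: no shuffle yet
                      .reduceByKey(_ + _) // shuffle: identical keys meet on one node
    counts.collect().foreach(println)
    sc.stop()
  }
}
```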
Second, what problems may Shuffle face?
1. The amount of data involved can be very large;
2. How to classify the data, that is, how to Partition, Hash, Sort, and apply Tungsten-based computation;
3. Load balancing (data skew);
4. Network transmission efficiency: a tradeoff has to be made between compressing and not compressing the data, and serialization / deserialization must also be considered (see the configuration sketch after this list);
Note: when a concrete task runs, try as far as possible to give the data Process Locality; the fallback is to increase the number of data partitions and reduce the amount of data each task processes.
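As a concrete illustration of point 4, here is a hedged configuration sketch using standard Spark 1.x properties; the values are illustrative rather than recommendations, since the right tradeoff between CPU spent on compression/serialization and bytes moved over the network depends on the workload.

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("ShuffleTuning")
  // compress map output before it crosses the network (CPU cost vs. less IO)
  .set("spark.shuffle.compress", "true")
  // also compress data spilled to disk during the shuffle
  .set("spark.shuffle.spill.compress", "true")
  // Kryo is usually faster and more compact than Java serialization
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
```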
Third, Hash Shuffle
1. The key cannot be an Array;
2. Hash Shuffle does not need to sort, so in theory it saves the time that Hadoop MapReduce's Shuffle wastes on sorting, and in real production environments a large share of Shuffle operations do not need sorting at all;
Is Hash Shuffle, which does not sort, necessarily faster than Sorted Shuffle, which does? Not necessarily! When the data scale is relatively small, Hash Shuffle is (much) faster than Sorted Shuffle; but when the data volume is large, Sorted Shuffle is generally (much) faster than Hash Shuffle.
3. Each ShuffleMapTask computes, from the hash of a key, which Partition that key should be written to, and writes each Partition's records to a separate file. Each task therefore produces R files (R being the parallelism of the next Stage), so if the current Stage has M ShuffleMapTasks, M * R files are generated in total!!!
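A small sketch of the partition-selection logic described in point 3 (it mirrors the behavior of Spark's HashPartitioner; the M and R values are invented for illustration):

```scala
// Which partition (reducer) a key is written to under Hash Shuffle:
// a non-negative modulo of the key's hashCode, as in Spark's HashPartitioner.
def targetPartition(key: Any, numReducers: Int): Int = {
  val mod = key.hashCode % numReducers
  if (mod < 0) mod + numReducers else mod
}

// File explosion with illustrative (assumed) numbers:
val m = 1000                          // ShuffleMapTasks in the current Stage
val r = 1000                          // parallelism of the next Stage
val filesWithoutConsolidation = m * r // 1,000,000 small files on disk
```

This also hints at why the key cannot be an Array: an Array's hashCode is identity-based rather than content-based, so two arrays with equal contents would not land in the same partition.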
Note: the Shuffle operation mostly goes over the network; only if the Mapper and Reducer happen to be on the same machine is the local disk alone sufficient.
Hash Shuffle has two fatal weaknesses. First, before the Shuffle it produces a massive number of small files on disk, which causes a large number of time-consuming, inefficient IO operations. Second, memory is not shared!!! Memory must hold a huge number of file handles and temporary cache buffers, none of which are shared between tasks, so if the scale of the data being processed is large, memory cannot bear the load and problems such as OOM appear (a back-of-the-envelope sketch follows)!
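A rough, back-of-the-envelope sketch of the second weakness. The executor core count and reducer count below are assumptions, and 32 KB is the usual Spark 1.x default for spark.shuffle.file.buffer:

```scala
// Every concurrently running ShuffleMapTask keeps one open file handle plus a
// write buffer per reducer, and those buffers are not shared between tasks.
val coresPerExecutor = 16            // concurrent map tasks per executor (assumed)
val reducers         = 10000         // R: parallelism of the next Stage (assumed)
val bufferBytes      = 32 * 1024     // spark.shuffle.file.buffer (32 KB default)
val bufferMemory     = coresPerExecutor.toLong * reducers * bufferBytes
println(f"write buffers per executor: ${bufferMemory / math.pow(1024, 3)}%.1f GB")
// roughly 4.9 GB of buffer space alone, before handles and metadata -> OOM risk
```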
Fourth, Sorted Shuffle:
In order to improve on the problems above (opening too many files at once causes the Writer Handlers to consume excessive memory, and the excessive number of files makes disk IO extremely inefficient because of the large number of random reads and writes), Spark later introduced the consolidate mechanism to merge small files. With consolidation, the number of files produced by a Shuffle becomes Cores * R; when the number of ShuffleMapTasks is significantly larger than the number of cores available to run them in parallel, the number of files produced by the Shuffle drops drastically, which greatly reduces the possibility of OOM (see the configuration sketch below);
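A hedged sketch of turning consolidation on for the hash-based shuffle. spark.shuffle.consolidateFiles is the historical Spark 1.x property for this mechanism (whether a given 1.x release still honors it should be checked against its docs), and the core and task counts are assumptions used only for the arithmetic:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("ConsolidatedHashShuffle")
  .set("spark.shuffle.manager", "hash")            // consolidation applies to the hash shuffle
  .set("spark.shuffle.consolidateFiles", "true")   // merge per-task files into per-core files

// File-count comparison with illustrative (assumed) numbers:
val m     = 1000                      // ShuffleMapTasks in the current Stage
val r     = 1000                      // parallelism of the next Stage
val cores = 32                        // cores running map tasks concurrently
val withoutConsolidation = m * r      // 1,000,000 files
val withConsolidation    = cores * r  //    32,000 files
```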
Spark also introduced an open, pluggable framework so that the Shuffle functionality module can be customized when the system is upgraded, and so that third-party developers can build the Shuffle implementation best suited to their concrete scenario. The core interface is ShuffleManager, with concrete implementations such as HashShuffleManager and SortShuffleManager; in Spark 1.6.1 the sort-based implementation is the default.
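A hedged sketch of how the pluggable framework is exercised: spark.shuffle.manager selects the ShuffleManager implementation by short name, and a custom implementation can also be supplied by its fully qualified class name.

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("PluggableShuffle")
  // built-in short names in Spark 1.6.1: "hash", "sort" (the default), "tungsten-sort";
  // a third party can instead pass the fully qualified class name of its own ShuffleManager
  .set("spark.shuffle.manager", "sort")
```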