Liaoliang on Spark Performance Optimization, Part 6

Source: Internet
Author: User
Tags: shuffle

Content:

1. A recap of shuffle;

2. Shuffle performance optimization.

The core skill is being able to comfortably work with the shuffle implementations — hash-based, sort-based, and Tungsten (Project Tungsten) — and pick the right one for each scenario. Spark 1.6.0 defaults to sort-based shuffle.

Earlier sessions covered shuffle in general terms, examining it from both the hash-based and sort-based perspectives.

========== Shuffle Performance Tuning ==========

1. Question: why is "Shuffle output file lost" reported? The real cause is usually GC! A GC pause — especially a Full GC — typically stops the worker threads. During that pause, the Tasks of the next Stage try to fetch shuffle data and fail; by default each fetch is retried 3 times with a 5-second interval between retries, i.e. 3 × 5s = 15s in total. If the data still cannot be fetched within that window, "Shuffle output file lost" is reported, which triggers Task retries, possibly Stage retries, and in the worst case the failure of the whole Application. The first step in fixing this is therefore to use efficient in-memory data structures and an efficient serialization mechanism, and to tune the JVM to reduce Full GC.
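The GC-reduction step above can be sketched as `spark-defaults.conf` settings. Enabling Kryo serialization is a standard Spark technique for shrinking shuffle data and easing GC pressure; the CMS collector flag is one illustrative JVM-tuning choice, not a value prescribed by the original:

```
# Kryo serialization: smaller shuffle payloads, less GC pressure
spark.serializer                 org.apache.spark.serializer.KryoSerializer

# Illustrative executor JVM tuning aimed at reducing long Full GC pauses
spark.executor.extraJavaOptions  -XX:+UseConcMarkSweepGC
```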

2. During shuffle, the reducer side uses a buffer of a fixed size to hold the data it fetches. If executor memory is ample, you can increase this buffer appropriately; otherwise fetched data spills to disk, hurting efficiency.

In that case, adjust (increase) the spark.reducer.maxSizeInFlight parameter; the default is 48MB, and with enough memory it can be raised to 128MB or more.
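As a config sketch (the 128m value follows the text above; treat it as an example, not a universal recommendation):

```
# Reducer-side in-flight fetch buffer: default is 48m; a larger value
# reduces fetch rounds when executor memory allows it
spark.reducer.maxSizeInFlight    128m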

3. On the map-task side of the shuffle, it usually helps to increase the buffer that map tasks use when writing shuffle output to disk; the default is 32K:

spark.shuffle.file.buffer 32k
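A minimal sketch of this adjustment in `spark-defaults.conf` (64k is an illustrative doubling of the default, chosen here as an assumption):

```
# Map-side write buffer for shuffle spill files: default 32k;
# a larger buffer means fewer disk flushes during shuffle writes
spark.shuffle.file.buffer        64k
```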

4. Adjust the number of retries for fetching shuffle data; the default is 3, and increasing it is generally recommended (spark.shuffle.io.maxRetries).

Also adjust the wait interval between retries; the default is 5s, and increasing it is strongly recommended: spark.shuffle.io.retryWait 5s
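The two retry settings above can be sketched together; the concrete values (6 retries, 10s wait) are illustrative increases, not figures from the original:

```
# Fetch-failure tolerance: defaults are 3 retries at 5s intervals.
# Larger values ride out long GC pauses on the map-side executors.
spark.shuffle.io.maxRetries      6
spark.shuffle.io.retryWait       10s
```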

5. When the reducer side performs aggregation, by default 20% of memory is used for it; anything beyond that spills to disk. Increasing this percentage is recommended to improve performance.
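Under the legacy memory manager (Spark ≤ 1.5, or 1.6 with legacy mode enabled), this 20% corresponds to spark.shuffle.memoryFraction; the sketch below raises it to 0.3 as an illustrative assumption:

```
# Fraction of heap used for shuffle aggregation before spilling to disk
# (legacy memory management; default is 0.2)
spark.shuffle.memoryFraction     0.3
```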


Liaoliang's contact card:

The first person of Spark in China

Sina Weibo: http://weibo.com/ilovepains

WeChat public account: DT_Spark

Blog: http://blog.sina.com.cn/ilovepains

Mobile: 18610086859

qq:1740415547

Email: [Email protected]


This article is from the "A Flower Proud of the Cold" blog; reprinting is declined.
