Troubleshooting data skew problems in Spark

I. The phenomenon of data skew

Most tasks finish quickly, but a few run for a very long time, or hang for a long time and then fail with an insufficient-memory error.

II. Causes of data skew

Data skew can occur with any shuffle operation, such as reduceByKey, groupByKey, and join. The causes fall into two groups.

Data problems:

- The keys themselves are unevenly distributed (including a large number of null keys).
- The keys are designed unreasonably.

Spark usage problems:

- The shuffle parallelism is too low.
- The calculation method is inefficient (for example, groupByKey where reduceByKey would do).

III. Consequences of data skew

The execution time of a stage in Spark is bounded by its last task to finish, so a single slow task slows down the entire program (the speed of a distributed program is determined by its slowest task). And when too much data lands in one task, the executor blows up, causing an OOM error and stopping the program.

An ideal distributed program: (figure omitted)

When data skew occurs, the execution time of the stage is determined by the largest task: (figure omitted)

IV. Data skew caused by data problems

When you find that the data is skewed, do not rush to add executor resources, tweak parameters, or modify the program. First check the data itself for abnormal records.

Finding the abnormal keys

If the stage is stuck on the last one (or few) tasks for a long time, sample the keys to determine which key is responsible.

Select the key column, sample the data, count the occurrences of each key, and sort to list the most frequent ones:

Df.select ("key"). Sample (false,0.1). (K=> (k,1)). Reducebykey (_+_). Map (k=> (k._2,k._1)). Sortbykey (False). Take (10)

Data skew is confirmed when most keys are fairly evenly distributed while a few keys appear orders of magnitude more often than the rest.

After analysis, the skewed data mainly falls into the following three situations:

1. Null values or other meaningless records; most skew comes from this cause.
2. Invalid data: large amounts of repeated test data, or valid data that has little impact on the result.
3. Valid data: normal data whose distribution is simply driven by the business.

Solutions

For situations 1 and 2, simply filter the data out, as in the sketch below.
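A minimal sketch of such a filter, assuming a DataFrame df whose key column is named "key"; both names are illustrative:

    import org.apache.spark.sql.functions.col

    // Drop null and empty keys before the shuffle; adjust the predicate
    // to whatever counts as "meaningless" in your data.
    val cleaned = df.filter(col("key").isNotNull && col("key") =!= "")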

The third situation requires some special handling; several practices are common:

- Isolated execution: filter the exceptional keys out, process them separately, and finally union the result with that of the normal data.
- Add a random value to the key first, aggregate, remove the random value, then aggregate once more.
- Use a map join instead of a reduce join; use reduceByKey instead of groupByKey.

Example:

Suppose a reduceByKey job fails to run because of data skew. Proceed as follows:

1. Convert the original key into key + random value (for example, with Random.nextInt).
2. Apply reduceByKey(func) to the salted data.
3. Convert key + random value back to key.
4. Apply reduceByKey(func) again.

A sketch of these four steps follows.
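A minimal sketch of this two-stage ("salted") aggregation, assuming an RDD[(String, Long)] named pairs and a func of _ + _; both names are illustrative:

    import scala.util.Random

    val salted   = pairs.map { case (k, v) => (s"${Random.nextInt(100)}_$k", v) } // 1. prefix a salt in 0..99
    val partial  = salted.reduceByKey(_ + _)                                      // 2. first aggregation, on salted keys
    val unsalted = partial.map { case (k, v) => (k.split("_", 2)(1), v) }         // 3. drop everything up to the first "_"
    val result   = unsalted.reduceByKey(_ + _)                                    // 4. final aggregation, on the real keys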

TIP 1: If there is still a problem at this point, filter out the skewed keys and process them separately; finally, union the result with that of the normal data, as sketched below.
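A minimal sketch of that isolation, assuming pairs as above and that the offending keys were identified by the sampling step (the literal keys here are hypothetical):

    val skewedKeys = Set("", "N/A")            // hypothetical keys found by sampling
    val bcKeys     = sc.broadcast(skewedKeys)  // assumes the usual SparkContext, sc

    val skewed = pairs.filter { case (k, _) => bcKeys.value.contains(k) }
    val normal = pairs.filter { case (k, _) => !bcKeys.value.contains(k) }

    // Process the skewed part with one of the special techniques (salting, map join),
    // then merge the two results.
    val result = skewed.reduceByKey(_ + _).union(normal.reduceByKey(_ + _))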

TIP 2: When handling the exceptional data separately, you can combine this with a map join.

V. Data skew caused by improper use of Spark

1. Increase shuffle parallelism

DataFrames and Spark SQL use the spark.sql.shuffle.partitions parameter to control shuffle parallelism; it defaults to 200.
RDD operations use spark.default.parallelism; its default is determined by the cluster manager.
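A minimal sketch of raising both settings; the value 800 and all names are illustrative:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("skew-demo")                            // hypothetical application name
      .config("spark.sql.shuffle.partitions", "800")   // DataFrame / Spark SQL shuffles (default 200)
      .config("spark.default.parallelism", "800")      // default RDD shuffle parallelism
      .getOrCreate()

    // Parallelism can also be raised per shuffle operation:
    val counts = pairs.reduceByKey(_ + _, 800)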

Limitations: this only makes each task process fewer distinct keys. It cannot fix skew caused by an individual, exceptionally large key: if one key is huge, the task that handles it suffers even when it handles that key alone.

2. Use a map join instead of a reduce join

When the small table is not particularly large (depending on your executor memory), a map join lets the program avoid the shuffle entirely, and with no shuffle there can be no data skew.

Limitations: because the small table is first sent to every executor, it cannot be too large. A sketch follows.
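A minimal sketch of a map join on DataFrames via Spark's broadcast hint, assuming smallDf fits comfortably in executor memory (largeDf, smallDf, and the column name are illustrative):

    import org.apache.spark.sql.functions.broadcast

    // smallDf is shipped to every executor, so the join happens map-side:
    // largeDf is never shuffled, and no task can be skewed by a hot key.
    val joined = largeDf.join(broadcast(smallDf), Seq("key"))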

For specific usage and the full process, see:

Spark Map-side-join Association Optimization

Spark join broadcast optimization
