A solution to data skew in Hadoop jobs when joining large data volumes

Data skew refers to the situation where, while a map/reduce program is running, most reduce tasks finish quickly but one or a few reduce tasks run slowly, so the whole job takes a long time. This happens because the number of records for a particular key is far greater than for other keys (sometimes hundreds or thousands of times greater); the reduce task that handles that key has to process much more data than the others, so a few nodes lag behind.

Data skew is often encountered when you use a Hadoop program to join (associate) data sets. Here is a solution.

(1) Choose a hash factor n that will be used to scatter the keys that have a large number of records.
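For instance, n could be handed to the job through the configuration. A minimal driver-side sketch, assuming the old mapred API; the property name job.ihashnum and the class name SkewJoinJob are hypothetical, not from the original article:

    // Driver-side sketch; property and class names are assumptions.
    JobConf conf = new JobConf(SkewJoinJob.class);
    conf.setInt("job.ihashnum", 10);   // the hash factor n; tune it experimentally (see the note at the end)

The mappers would read the value back in configure(), as in the sketch after step (2).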

(2) Process the data set whose keys have many duplicate records: append a number from 1 to n to each key to form the new key. If you need to join it with another data set, you also have to rewrite the comparison and distribution (partitioning) classes (for example, as in the previous article, "A method for Hadoop jobs to solve the association of large data volumes"); a sketch of those two classes is given after step (3) below. In this way, the records of a hot key are distributed evenly across reduce tasks.

    iNum = iNum % iHashNum;                                               // iNum: a running record counter folded into the range 0..n-1
    String strKey = key + CTRL_C + String.valueOf(iNum) + CTRL_B + "B";   // tag the record as belonging to data set "B"
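A minimal sketch of how these two lines might sit inside a mapper for the large, skewed data set, assuming the old org.apache.hadoop.mapred API, a tab-separated input of key and value, the hypothetical job.ihashnum property from above, and CTRL_B/CTRL_C standing for the 0x02/0x03 separator characters:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Mapper for the skewed data set ("B"). Class, field, and property names are illustrative.
    public class SkewedSideMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

        private static final String CTRL_B = "\u0002";   // separates the key part from the data-set tag
        private static final String CTRL_C = "\u0003";   // separates the original key from the hash number
        private int iHashNum;                             // the hash factor n
        private int iNum = 0;                             // running record counter, folded into 0..n-1

        @Override
        public void configure(JobConf job) {
            iHashNum = job.getInt("job.ihashnum", 10);    // set in the driver
        }

        @Override
        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
            String[] fields = line.toString().split("\t", 2);   // assumed layout: join key, then the rest of the record
            String key = fields[0];
            String strValues = fields.length > 1 ? fields[1] : "";

            iNum = (iNum + 1) % iHashNum;                       // spread successive records of the same key over n buckets
            String strKey = key + CTRL_C + String.valueOf(iNum) + CTRL_B + "B";
            output.collect(new Text(strKey), new Text(strValues));
        }
    }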

(3) After the previous step, the hot keys are distributed evenly across many different reduce tasks. If you need to join with the other data set, then to make sure every one of those reduce tasks receives its matching key, process the other data set (the one with only one or a few records per key): in a loop over the n hash numbers, append each number to the key as a new key, so that every record is replicated once per hash number.

    for (int i = 0; i < iHashNum; ++i) {
        String strKey = key + CTRL_C + String.valueOf(i);        // no "B" tag, so this record sorts before the "B" records of the same group
        output.collect(new Text(strKey), new Text(strValues));   // one copy of the record per hash number
    }
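The comparison and distribution classes mentioned in step (2) could look roughly as follows: a partitioner that hashes only the part of the composite key before CTRL_B, so the replicated records from step (3) and the tagged "B" records with the same key and hash number land on the same reducer, plus a grouping comparator that groups on the same prefix. This is a sketch under those assumptions, not the exact classes of the earlier article:

    // --- SaltedKeyPartitioner.java ---
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Partitions on "originalKey CTRL_C hashNum", ignoring everything from CTRL_B onwards.
    public class SaltedKeyPartitioner implements Partitioner<Text, Text> {
        private static final char CTRL_B = '\u0002';

        @Override
        public void configure(JobConf job) { }

        @Override
        public int getPartition(Text key, Text value, int numPartitions) {
            String s = key.toString();
            int pos = s.indexOf(CTRL_B);
            String prefix = (pos >= 0) ? s.substring(0, pos) : s;
            return (prefix.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    // --- SaltedKeyGroupingComparator.java ---
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.io.WritableComparator;

    // Groups keys by the same prefix. With the default Text sort, 0x02 is smaller than any
    // printable character, so keys sharing a prefix stay adjacent and the untagged record
    // from step (3) reaches reduce() before the "B" records it has to be joined with.
    public class SaltedKeyGroupingComparator extends WritableComparator {
        private static final char CTRL_B = '\u0002';

        public SaltedKeyGroupingComparator() {
            super(Text.class, true);
        }

        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            return prefix(a.toString()).compareTo(prefix(b.toString()));
        }

        private static String prefix(String s) {
            int pos = s.indexOf(CTRL_B);
            return (pos >= 0) ? s.substring(0, pos) : s;
        }
    }

In the driver, the two classes would be registered with conf.setPartitionerClass(SaltedKeyPartitioner.class) and conf.setOutputValueGroupingComparator(SaltedKeyGroupingComparator.class).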

This solves the data skew and greatly reduces the running time of the job. However, the method multiplies the volume of one of the data sets by n, at the cost of a larger shuffle. Therefore, when using it, you need to run several experiments to find a good value for the hash factor.

==============================================

Although data skew can be solved with the method above, when the data volume to be joined is large, multiplying one of the data sets n times makes the amount of data shuffled to the reducers huge; the gain is not worth the cost, and the long running time still cannot be fixed.

There is another way that avoids the defect of multiplying the data:

Find something the two data sets have in common. For example, besides the join key, both data sets may contain another field with the same meaning. If the values of this field rarely repeat across the logs, it can be used to compute the hash number instead of a counter: if it is a number, take it modulo the number of hash buckets; if it is a string, take its hashCode modulo the number of hash buckets (and, to avoid too many records falling into the same reduce task, the hashCode can be transformed further). If the values of this field are distributed evenly enough, the problem above is solved without replicating either data set, because matching records on the two sides produce the same hash number.
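A minimal sketch of that idea, assuming a hypothetical shared field (say, a user id) that exists in both data sets, and the same CTRL_B/CTRL_C conventions as above; both mappers call the identical bucketOf() helper, so neither data set has to be replicated:

    // Hypothetical helper; both sides derive the same hash number from the shared field.
    private static int bucketOf(String sharedField, int iHashNum) {
        try {
            // numeric field: take the value modulo the number of hash buckets
            return (int) Math.floorMod(Long.parseLong(sharedField.trim()), (long) iHashNum);
        } catch (NumberFormatException e) {
            // otherwise fall back to the (non-negative) hashCode modulo the number of buckets
            return (sharedField.hashCode() & Integer.MAX_VALUE) % iHashNum;
        }
    }

    // Mapper of data set "B":
    //     String strKey = key + CTRL_C + bucketOf(userId, iHashNum) + CTRL_B + "B";
    // Mapper of data set "A" (the loop over n from step (3) is no longer needed):
    //     String strKey = key + CTRL_C + bucketOf(userId, iHashNum);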

 

The conditions for the second method are harder to meet, so it is not very widely applicable.

 
