Spark: Data Partitioning (Advanced)

Source: Internet
Author: User
Tags: join

Controlling how a dataset is partitioned across nodes is one of Spark's key features. Communication in a distributed program is expensive: just as a single-node program must choose the right data structure for a collection of records, a Spark program can reduce communication overhead by controlling how its RDDs are partitioned. Partitioning helps only when a dataset is reused multiple times in key-based operations such as joins; if an RDD is scanned only once, there is no point in partitioning it in advance.
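To see why co-partitioning saves communication, here is a pure-Scala sketch (no Spark dependency; the object and method names are illustrative, not Spark API). It mimics Spark's hash partitioning and shows that when both sides of a join are split by the same partitioner, matching keys are already co-located, so the join can run partition-by-partition with no data movement between partitions:

```scala
object CoPartitionSketch {
  // Mimics HashPartitioner: non-negative key.hashCode modulo numPartitions.
  def partitionOf(key: Any, numPartitions: Int): Int = {
    val raw = key.hashCode % numPartitions
    if (raw < 0) raw + numPartitions else raw
  }

  // Split a key-value dataset into partitions using the hash scheme above.
  def partition[V](data: Seq[(Int, V)], n: Int): Map[Int, Seq[(Int, V)]] =
    data.groupBy { case (k, _) => partitionOf(k, n) }

  // Join two co-partitioned datasets partition-by-partition: because both
  // sides used the same partitioner, matching keys share a partition index.
  def coPartitionedJoin[A, B](left: Seq[(Int, A)],
                              right: Seq[(Int, B)],
                              n: Int): Seq[(Int, (A, B))] = {
    val (pl, pr) = (partition(left, n), partition(right, n))
    (0 until n).flatMap { i =>
      for {
        (k1, a) <- pl.getOrElse(i, Seq.empty)
        (k2, b) <- pr.getOrElse(i, Seq.empty)
        if k1 == k2
      } yield (k1, (a, b))
    }
  }

  def main(args: Array[String]): Unit = {
    val users  = Seq((1, "alice"), (2, "bob"), (3, "carol"))
    val events = Seq((1, "click"), (3, "view"), (1, "buy"))
    // Every matching pair is found without crossing partition boundaries.
    println(coPartitionedJoin(users, events, 2).sorted)
  }
}
```

In real Spark, this local per-partition join is exactly what happens when two RDDs share the same partitioner; without it, one or both sides must be shuffled across the network first.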
I. How to get an RDD's partitioner
In Scala and Java, you can inspect an RDD's partitioner property (the partitioner() method in Java). It returns a scala.Option object, the container class Scala uses for a value that may or may not exist. Call isDefined on the Option to check whether it holds a value, and get to retrieve it; if a value is present, it is a spark.Partitioner object.
Example:

scala> val pairs = sc.parallelize(List((1, 1), (2, 2), (3, 3)))
pairs: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> pairs.partitioner
res0: Option[org.apache.spark.Partitioner] = None

scala> import org.apache.spark
import org.apache.spark

scala> val partitioned = pairs.partitionBy(new spark.HashPartitioner(2))
partitioned: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[1] at partitionBy at <console>:27

scala> partitioned.partitioner
res1: Option[org.apache.spark.Partitioner] = Some(org.apache.spark.HashPartitioner@2)

Note that you should call persist() on partitioned (the fourth statement above) if you plan to reuse it; otherwise each subsequent action on it would re-evaluate partitioned's entire lineage, causing pairs to be hash-partitioned over and over.
II. Operations that benefit from partitioning
The operations that can benefit from a pre-set partitioner are: cogroup(), groupWith(), join(), leftOuterJoin(), rightOuterJoin(), groupByKey(), reduceByKey(), combineByKey(), and lookup().
III. Operations that affect partitioning
Spark knows internally how each of its operations affects partitioning, and it automatically sets the appropriate partitioner on the RDD produced by any operation that partitions the data. However, the result of a transformation is not necessarily partitioned in a known way, in which case the output RDD will have no partitioner set.
The operations that result in a partitioner being set on the output RDD are: cogroup(), groupWith(), join(), leftOuterJoin(), rightOuterJoin(), groupByKey(), reduceByKey(), combineByKey(), partitionBy(), sort(), mapValues() (if the parent RDD has a partitioner), flatMapValues() (if the parent RDD has a partitioner), and filter() (if the parent RDD has a partitioner). All other operations produce results with no known partitioner.
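The intuition for why mapValues() and flatMapValues() keep the partitioner while a general map() does not can be shown with a pure-Scala sketch (no Spark dependency; names are illustrative): if a transformation leaves the keys untouched, every record still belongs to the partition the old partitioner assigned it, whereas a key-changing map() gives no such guarantee.

```scala
object PartitionerPreservation {
  // Mimics HashPartitioner's non-negative hash-modulo assignment.
  def partitionOf(key: Any, n: Int): Int = {
    val raw = key.hashCode % n
    if (raw < 0) raw + n else raw
  }

  def main(args: Array[String]): Unit = {
    val n = 4
    val pairs = Seq((1, "a"), (7, "b"), (10, "c"))

    // mapValues-style change: keys untouched => partition assignments unchanged.
    val valuesMapped = pairs.map { case (k, v) => (k, v.toUpperCase) }
    println(pairs.map(p => partitionOf(p._1, n)) ==
            valuesMapped.map(p => partitionOf(p._1, n))) // true

    // map-style key change: records may now belong to different partitions,
    // so Spark cannot promise the old partitioner still describes the data.
    val keysMapped = pairs.map { case (k, v) => (k + 1, v) }
    println(pairs.map(p => partitionOf(p._1, n)) ==
            keysMapped.map(p => partitionOf(p._1, n))) // false here
  }
}
```

This is why, when you only need to transform values of a pair RDD, mapValues() is preferable to map(): it preserves the partitioner and can spare downstream operations a shuffle.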
IV. Custom partitioners
To implement a custom partitioner, you need to extend the org.apache.spark.Partitioner class and implement the following three methods:
1. numPartitions: Int: returns the number of partitions you will create.
2. getPartition(key: Any): Int: returns the partition number (0 to numPartitions - 1) for a given key; make sure it always returns a non-negative result.
3. equals(): the standard Java equality method. Implementing it is important because Spark uses it to check whether your Partitioner object equals other Partitioner instances, which is how Spark decides whether two RDDs are partitioned in the same way.
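The three methods above can be sketched as follows. This is a minimal sketch, not a definitive implementation: in real Spark code you would import and extend org.apache.spark.Partitioner directly, and the local Partitioner trait here is only a stand-in with the same shape so the example compiles without Spark on the classpath; MyPartitioner is a hypothetical name.

```scala
// Stand-in for org.apache.spark.Partitioner (assumption: same method shape),
// used here only so the sketch is self-contained without a Spark dependency.
trait Partitioner extends Serializable {
  def numPartitions: Int
  def getPartition(key: Any): Int
}

// A hypothetical custom partitioner implementing the three required methods.
class MyPartitioner(partitions: Int) extends Partitioner {
  require(partitions > 0, "need at least one partition")

  // 1. How many partitions this partitioner creates.
  override def numPartitions: Int = partitions

  // 2. Partition index for a key, guaranteed to be in [0, numPartitions - 1].
  override def getPartition(key: Any): Int = {
    val raw = key.hashCode % numPartitions
    if (raw < 0) raw + numPartitions else raw // always non-negative
  }

  // 3. Equality check Spark uses to decide whether two RDDs are partitioned
  // the same way (and so whether a shuffle can be skipped when joining them).
  override def equals(other: Any): Boolean = other match {
    case p: MyPartitioner => p.numPartitions == numPartitions
    case _                => false
  }

  // Whenever equals() is overridden, hashCode should be kept consistent.
  override def hashCode: Int = numPartitions
}
```

In real code you would then pass an instance to partitionBy, e.g. rdd.partitionBy(new MyPartitioner(8)), and persist the result if it is reused.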
