Operations on the Apache Spark RDD

Tags: shuffle, spark, rdd

Operations on RDDs

RDDs support two types of operations: transformations and actions.

1) Transformation: creates a new dataset from an existing dataset.

2) Action: runs a computation on the dataset and returns a value to the driver program.

For example, map is a transformation that passes each element of the dataset through a function and returns a new distributed dataset representing the results. reduce, on the other hand, is an action that aggregates all the elements of the dataset with some function and returns the final result to the driver (although there is also a parallel reduceByKey, a transformation that returns a distributed dataset).
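
As a minimal sketch, the following snippet contrasts the two kinds of operations. It assumes a spark-shell session, where `sc` is the pre-created SparkContext; the data is made up for the example:

```scala
val nums = sc.parallelize(1 to 10)

// map is a transformation: it returns a new distributed dataset (an RDD).
val squares = nums.map(n => n * n)

// reduce is an action: it combines all elements and returns a single
// value to the driver program.
val sum = squares.reduce(_ + _)   // 385

// reduceByKey, in contrast, is a transformation: the result stays distributed.
val pairs  = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val totals = pairs.reduceByKey(_ + _)   // RDD[(String, Int)], not a local value
```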

Figure 1 shows the logical execution diagram: an RDD is created from an external data source, goes through a series of transformations, and the result is finally written back to the external storage system through an action. The computation of the entire process runs inside the executors on the worker nodes.

Figure 1: Schematic diagram of the creation, transformation, and action of an RDD
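
A sketch of that end-to-end flow, again assuming a spark-shell session; the HDFS paths are hypothetical:

```scala
val logs   = sc.textFile("hdfs:///input/logs")        // create an RDD from external storage
val errors = logs.filter(_.contains("ERROR"))         // transformation
val tagged = errors.map(line => s"[flagged] $line")   // transformation
tagged.saveAsTextFile("hdfs:///output/flagged-logs")  // action: write back to external storage
```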

Transformations of RDDs

All transformations in Spark are lazy; they do not compute their results right away. Instead, they just remember the transformations applied to the base dataset (such as a file). The transformations are only actually computed when an action requires a result to be returned to the driver. This design lets Spark run more efficiently: for example, a dataset created through map can be consumed by reduce, so that only the result of the reduce is returned to the driver rather than the entire, larger mapped dataset. Figure 2 depicts the internal logical execution diagram of an RDD during a groupByKey, and Figure 3 depicts the corresponding diagram for reduceByKey.
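
This lineage-only behavior can be observed directly in spark-shell; `toDebugString` prints the recorded plan without running it (the input file name below is made up):

```scala
val lines   = sc.textFile("data.txt")   // hypothetical input file; nothing is read yet
val lengths = lines.map(_.length)       // still no computation, only lineage is recorded

// Prints the chain of RDDs that would be computed, without computing it.
println(lengths.toDebugString)

// Only this action triggers a job; a single Int travels back to the driver,
// never the full mapped dataset.
val total = lengths.reduce(_ + _)
```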

Figure 2: The logical execution diagram of groupByKey on an RDD

In the groupByKey operation, one shuffle is performed on the MapPartitionsRDD. The number of partitions in Figure 2 is set to 3, so the ShuffledRDD has 3 partitions. The ShuffledRDD actually reads the shuffle results from the upstream tasks, which is why the arrows in the diagram point to the upstream MapPartitionsRDD. The real implementation of the shuffle is much more complex than the diagram shows. The implementation of reduceByKey is similar to that of groupByKey; it likewise requires one shuffle to complete.

Figure 3: The logical execution diagram of reduceByKey on an RDD
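
A sketch contrasting the two operations, assuming spark-shell and using 3 partitions to mirror the figures:

```scala
// Three partitions, to match the figures above.
val pairs = sc.parallelize(
  Seq(("a", 1), ("b", 1), ("a", 1), ("c", 1), ("b", 1)), 3)

// groupByKey shuffles every (key, value) pair; each ShuffledRDD partition
// then pulls its groups from the upstream map-side outputs.
val grouped = pairs.groupByKey()            // RDD[(String, Iterable[Int])]
grouped.mapValues(_.sum).collect()          // Array((a,2), (b,2), (c,1)), in some order

// reduceByKey also shuffles once, but combines values on the map side first,
// so less data crosses the network for the same result.
val reduced = pairs.reduceByKey(_ + _)      // RDD[(String, Int)]
reduced.collect()
```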

By default, each transformed RDD is recomputed every time an action runs on it. However, you can also use the persist (or cache) method to keep an RDD in memory. In that case, Spark keeps the relevant partitions on the cluster nodes so that the next time the RDD is queried it can be accessed much more quickly. Persisting datasets on disk, or replicating them across the cluster, is also supported.
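
A small sketch of caching in spark-shell; the input path is hypothetical:

```scala
import org.apache.spark.storage.StorageLevel

val words = sc.textFile("hdfs:///input/words")   // hypothetical path
              .flatMap(_.split("\\s+"))

// cache() is shorthand for persist(StorageLevel.MEMORY_ONLY): the partitions
// are kept in memory after the first action computes them.
words.cache()

words.count()            // first action: reads the file, computes, and caches
words.distinct().count() // reuses the cached partitions instead of re-reading

// Other storage levels spill to disk or replicate across nodes, e.g.:
// words.persist(StorageLevel.MEMORY_AND_DISK_2)
```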
