The underlying implementation of RDDs
An RDD is a distributed dataset and, as the name implies, its data is stored across multiple machines. Concretely, each RDD's data is stored on multiple machines in the form of Blocks. In Spark's RDD storage architecture, each Executor starts a BlockManagerSlave that manages a subset of the Blocks, while the Block metadata is kept by the BlockManagerMaster on the Driver node. When a BlockManagerSlave creates a Block, it registers it with the BlockManagerMaster, which maintains the mapping between RDDs and Blocks. When an RDD no longer needs to be stored, the BlockManagerMaster sends an instruction to the BlockManagerSlaves to delete the corresponding Blocks.
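For illustration, here is a minimal sketch (assuming a local SparkContext) that persists an RDD and then inspects the block metadata the Driver has collected, via SparkContext.getRDDStorageInfo:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("block-demo").setMaster("local[2]"))

// Each partition of a cached RDD is stored as one Block on an Executor.
val rdd = sc.parallelize(1 to 1000, numSlices = 4).cache()
rdd.count()  // materializes the RDD, so its Blocks get registered

// getRDDStorageInfo reflects the Block metadata that the
// BlockManagerMaster on the Driver has collected from the slaves.
sc.getRDDStorageInfo.foreach { info =>
  println(s"RDD ${info.id}: ${info.numCachedPartitions}/${info.numPartitions} " +
    s"partitions cached, ${info.memSize} bytes in memory")
}
```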
Principle of the RDD cache
During RDD transformations, not every intermediate RDD is stored. If an RDD will be reused, or is expensive to recompute, you can store it by explicitly calling the cache() method that RDD provides. How is the RDD cache implemented?
The cache() method provided by RDD simply adds the RDD to a cache list. When the RDD's iterator is called, the RDD is computed through the CacheManager and stored in the BlockManager. The next time the RDD's data is needed, it can be read directly from the BlockManager through the CacheManager.
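A minimal sketch of the behavior described above: cache() only marks the RDD, and the blocks are written the first time an action forces computation.

```scala
val data = sc.parallelize(1 to 1000000)
val squares = data.map(x => x.toLong * x)

squares.cache()   // only adds the RDD to the cache list; nothing is computed yet

squares.count()   // first action: partitions are computed and stored as Blocks
squares.count()   // second action: data is read back from the BlockManager
```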
RDD dependencies and the DAG
RDD provides many transformation operations, each of which generates a new RDD that depends on the original one. The dependencies between these RDDs eventually form a DAG (Directed Acyclic Graph).
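The lineage that forms this DAG can be printed with RDD.toDebugString, for example:

```scala
val words = sc.parallelize(Seq("a b", "b c", "a c"))
  .flatMap(_.split(" "))
  .map(word => (word, 1))
val counts = words.reduceByKey(_ + _)

// Prints the chain of dependent RDDs; each indentation level in the
// output marks a shuffle (stage) boundary in the DAG.
println(counts.toDebugString)
```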
There are two types of dependencies between RDDs: NarrowDependency and ShuffleDependency. With a ShuffleDependency, each partition of the child RDD depends on all partitions of the parent RDD; with a NarrowDependency, each child partition depends on only one or a few parent partitions. For example, groupBy and join produce a ShuffleDependency, while map and union produce a NarrowDependency.
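The dependency type of any RDD can be inspected through its dependencies field, as a quick sketch shows:

```scala
val nums = sc.parallelize(1 to 10, 4)

// map: each child partition depends on exactly one parent partition
val mapped = nums.map(_ * 2)
println(mapped.dependencies.head)   // a OneToOneDependency (a NarrowDependency)

// groupByKey: each child partition depends on all parent partitions
val grouped = nums.map(x => (x % 2, x)).groupByKey()
println(grouped.dependencies.head)  // a ShuffleDependency
```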
RDD partitioner and parallelism
Each RDD has a Partitioner attribute, which determines how the RDD is partitioned; the number of partitions in turn determines the number of Tasks in each Stage. Spark lets you set the number of parallel tasks per stage with the configuration item spark.default.parallelism. If it is not set, the number of partitions of a child RDD is derived from its parents: for a map, the child RDD's partitioning is exactly the same as the parent's, while for a union the child's partition count is the sum of the parents' partition counts.
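The two derivation rules just mentioned can be checked directly with getNumPartitions:

```scala
val a = sc.parallelize(1 to 100, 4)
val b = sc.parallelize(1 to 100, 3)

println(a.map(_ + 1).getNumPartitions)  // 4: map keeps the parent's partitioning
println(a.union(b).getNumPartitions)    // 7: union sums the parents' partition counts
```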
Choosing a good value for spark.default.parallelism is a challenge for users, and it largely determines the performance of a Spark program.
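As a starting point, a sketch of setting the value explicitly; the rule of thumb of 2-3 tasks per CPU core comes from Spark's tuning guide, and the concrete value below is only a hypothetical example:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("parallelism-demo")
  // Hypothetical value: roughly 2-3 tasks per core, here for a ~100-core cluster.
  .set("spark.default.parallelism", "200")
val sc = new SparkContext(conf)
```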