Data storage for Spark


The core of Spark's data storage is the Resilient Distributed Dataset (RDD). An RDD can be abstracted as a large array, except that the array is distributed across the cluster. Logically, each chunk of an RDD is called a partition.
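To make the "distributed array" view concrete, here is a minimal sketch, assuming a SparkContext named sc is available (as in spark-shell); the element count and the 5-way split are arbitrary illustrative choices:

    // Create an RDD from a local collection, explicitly splitting it into 5 partitions.
    val rdd = sc.parallelize(1 to 100, numSlices = 5)

    println(rdd.getNumPartitions)  // 5: one logical partition per chunk of the "array"
    println(rdd.count())           // 100: all elements, spread across the cluster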
During execution, Spark passes an RDD through a series of transformation operators, and the computation is finally triggered by an action operator. Each transformation logically converts an RDD into a new RDD, and the lineage between RDDs plays a very important role in fault tolerance. Both the input and the output of a transformation are RDDs.

An RDD is divided into many partitions that are distributed across multiple nodes in the cluster. A partition is a logical concept: the old and new partitions before and after a transformation may physically reside in the same piece of memory. This is an important optimization that prevents memory requirements from growing without bound, which the immutability of functional data would otherwise cause. Some RDDs are intermediate results of a computation, and their partitions do not necessarily have corresponding data in memory or on disk; if the data is to be reused across iterations, it can be cached with the cache() function.
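To make the transformation/action life cycle concrete, here is a minimal word-count sketch, again assuming a SparkContext named sc; the input path is hypothetical:

    val input  = sc.textFile("hdfs:///tmp/input.txt")   // hypothetical input path
    val words  = input.flatMap(_.split(" "))            // transformation: RDD -> new RDD
    val pairs  = words.map(word => (word, 1))           // transformation: RDD -> new RDD
    val counts = pairs.reduceByKey(_ + _)               // transformation: RDD -> new RDD

    counts.cache()                    // mark for reuse; materialized on the first action
    println(counts.toDebugString)     // prints the lineage that fault tolerance relies on

    val total = counts.count()        // action: triggers the actual computation
    counts.take(10).foreach(println)  // a second action reuses the cached partitions

Until count() runs, nothing is computed; each step above merely adds one more link to the lineage.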

Figure 1 The RDD data management model

rdd_1 in Figure 1 contains five partitions (p1, p2, p3, p4, p5) stored on four nodes (Node1, Node2, Node3, Node4). rdd_2 contains three partitions (p1, p2, p3) distributed across three nodes (Node1, Node2, Node3).
Physically, the RDD object is essentially a metadata structure that stores the mapping between blocks and nodes, along with other metadata. An RDD is a collection of partitions, and in physical data storage each partition of an RDD corresponds to one block; a block can be kept in memory, or stored on disk when there is not enough memory.
Each block stores a subset of all the data items in the RDD. The interface exposed to the user can be an iterator over a block (for example, the user can obtain an iterator over a partition through mapPartitions), or an individual data item (for example, each data item can be computed in parallel through the map function). Later chapters of this book introduce the underlying implementation of data management in detail.
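The two access styles can be sketched as follows (the data is made up for illustration, and sc is again assumed):

    val rdd = sc.parallelize(1 to 10, 2)

    // Per-item view: the function is applied to every data item independently.
    val doubled = rdd.map(_ * 2)

    // Per-partition view: the function receives an iterator over one partition,
    // which is useful for amortizing per-partition setup work.
    val sums = rdd.mapPartitions { iter =>
      Iterator.single(iter.sum)  // one partial sum per partition
    }

    println(doubled.collect().mkString(", "))  // 2, 4, ..., 20
    println(sums.collect().mkString(", "))     // 15, 40 (partitions 1..5 and 6..10)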
If external storage such as HDFS is used as the input data source, the data is partitioned according to the distribution strategy in HDFS, and one block in HDFS corresponds to one partition in Spark. Spark also supports repartitioning: which nodes a data block is distributed to is decided by Spark's default partitioner or a user-defined one. For example, hash partitioning (data items are hashed by key, and elements with the same hash value fall into the same partition) and range partitioning (data belonging to the same key range is placed in the same partition) are both supported.
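A short sketch of both strategies on a made-up key/value RDD, assuming a SparkContext named sc:

    import org.apache.spark.{HashPartitioner, RangePartitioner}

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("c", 4)), 2)

    // Hash partitioning: the partition is chosen from the key's hash code,
    // so all items with the same key land in the same partition.
    val hashed = pairs.partitionBy(new HashPartitioner(4))

    // Range partitioning: keys are sampled to build sorted ranges, and keys
    // falling in the same range are placed in the same partition.
    val ranged = pairs.partitionBy(new RangePartitioner(4, pairs))

    println(hashed.partitioner)  // Some(...HashPartitioner...)
    println(ranged.partitioner)  // Some(...RangePartitioner...)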
