The basics of how Spark works
1. Distributed
2. Mainly memory-based (disk-based in a few cases)
3. Iterative computation
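The three points above can be illustrated with a small sketch. This is plain Python, not Spark; the dataset and `iterate` function are purely hypothetical stand-ins showing why keeping data in memory across iterations matters: each pass reuses the in-memory data instead of rereading it from disk, which is what makes Spark fast for iterative workloads.

```python
# Conceptual sketch (plain Python, NOT Spark): iterative computation
# over a dataset that is loaded once and kept in memory. Spark keeps
# RDD partitions in memory across iterations in the same spirit,
# avoiding one disk read per iteration.

data = list(range(10))  # stands in for a dataset loaded once into memory

def iterate(values, n_iterations):
    """Apply a transformation repeatedly, reusing the in-memory data."""
    for _ in range(n_iterations):
        values = [v + 1 for v in values]  # each pass reads from memory only
    return values

result = iterate(data, 3)  # three iterations, zero extra disk reads
```

In a disk-based system such as classic Hadoop MapReduce, each of those iterations would typically write its output to disk and read it back, which is exactly the overhead Spark's in-memory model avoids.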
The RDD and its features
1. RDD is the core abstraction provided by Spark. Its full name is Resilient Distributed Dataset, sometimes rendered as "elastic distributed dataset".
2. Conceptually, an RDD is a collection of data elements. It is partitioned, i.e. divided into multiple partitions, and each partition is distributed across different nodes in the cluster, which allows the data in the RDD to be operated on in parallel. (This is the "distributed dataset" part.)
3. An RDD is usually created from a file on Hadoop (an HDFS file or a Hive table), or sometimes from a collection in the application.
4. The most important feature of an RDD is fault tolerance: it can automatically recover from node failures. If a partition of an RDD is lost because its node fails, the RDD automatically recomputes that partition from its data source. All of this is transparent to the user.
5. RDD data is stored in memory by default, but when memory is insufficient, Spark automatically writes the RDD data to disk. (This is the "resilient" part.)
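Point 2 (partitioning) can be sketched in plain Python. This is not Spark code; the `partition` and `map_partitions` helpers are hypothetical stand-ins showing the idea that a dataset split into partitions can be processed partition by partition, which is what lets Spark run each partition on a different node in parallel.

```python
# Conceptual sketch (plain Python, NOT Spark): an RDD-like collection is
# split into partitions, and each partition is processed independently.
# In Spark the partitions live on different nodes; here they are plain
# sublists processed one after another for illustration.

def partition(data, num_partitions):
    """Split a list into roughly equal contiguous partitions."""
    size = (len(data) + num_partitions - 1) // num_partitions
    return [data[i:i + size] for i in range(0, len(data), size)]

def map_partitions(partitions, fn):
    """Apply fn to every element of every partition (the 'parallel' map)."""
    return [[fn(x) for x in part] for part in partitions]

parts = partition(list(range(8)), 3)            # three partitions
squared = map_partitions(parts, lambda x: x * x)
```

Because no element depends on another partition, the inner loops could run on separate machines with no coordination, which is the essence of RDD parallelism.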
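Point 4 (fault tolerance) hinges on lineage: an RDD remembers how each partition was computed rather than replicating the data itself. The `TinyRDD` class below is a hypothetical plain-Python sketch of that idea, not Spark's actual implementation: when a cached partition is "lost", it is rebuilt from its source function, transparently to the caller.

```python
# Conceptual sketch (plain Python, NOT Spark): lineage-based recovery.
# An RDD does not replicate partition data; it keeps the recipe
# (source + transformation) and recomputes a lost partition on demand.

def load_partition(pid):
    """Stand-in for reading a partition from a stable source (e.g. HDFS)."""
    return list(range(pid * 3, pid * 3 + 3))

class TinyRDD:
    def __init__(self, num_partitions, source, transform):
        self.source = source        # the lineage: how to rebuild a partition
        self.transform = transform
        self.cache = {pid: [transform(x) for x in source(pid)]
                      for pid in range(num_partitions)}

    def lose_partition(self, pid):
        del self.cache[pid]         # simulate a node failure losing data

    def get_partition(self, pid):
        if pid not in self.cache:   # recompute from lineage, transparently
            self.cache[pid] = [self.transform(x) for x in self.source(pid)]
        return self.cache[pid]

rdd = TinyRDD(2, load_partition, lambda x: x * 10)
rdd.lose_partition(1)               # "node failure"
recovered = rdd.get_partition(1)    # rebuilt from its source, same values
```

The caller of `get_partition` never sees the failure, matching the note above that recovery is transparent to the user.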
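Point 5 (the "resilient"/elastic storage level) can also be sketched. The `SpillableStore` below is a hypothetical illustration in plain Python, not Spark's memory manager: when an in-memory budget is exceeded, a partition is evicted to disk, and reads transparently fall back to the spilled file. The budget, eviction choice, and file handling are all invented for illustration.

```python
# Conceptual sketch (plain Python, NOT Spark): spilling partitions to
# disk when an in-memory budget is exceeded, and reading them back
# transparently. Spark does this automatically; the policy here is toy.
import os
import pickle
import tempfile

class SpillableStore:
    def __init__(self, max_in_memory):
        self.max_in_memory = max_in_memory  # illustrative memory budget
        self.memory = {}                    # partition id -> data
        self.disk = {}                      # partition id -> spill file path

    def put(self, pid, data):
        if len(self.memory) >= self.max_in_memory:
            # evict one resident partition to disk to stay within budget
            victim, values = self.memory.popitem()
            fd, path = tempfile.mkstemp()
            with os.fdopen(fd, "wb") as f:
                pickle.dump(values, f)
            self.disk[victim] = path
        self.memory[pid] = data

    def get(self, pid):
        if pid in self.memory:              # fast path: still in memory
            return self.memory[pid]
        with open(self.disk[pid], "rb") as f:  # slow path: read the spill
            return pickle.load(f)

store = SpillableStore(max_in_memory=2)
store.put(0, [1, 2])
store.put(1, [3, 4])
store.put(2, [5, 6])  # exceeds the budget, spills one partition to disk
```

Every partition remains readable after the spill; only the access latency changes, which is the "gracefully degrade to disk" behavior described above.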