The RDD (Resilient Distributed Dataset) is the core data structure of Spark.
DSM (distributed shared memory) is a common distributed memory abstraction: in DSM, applications can read and write to any location in a global address space.
The main difference between RDDs and DSM is that an RDD can only be created ("written") through coarse-grained bulk transformations, whereas DSM allows writes to arbitrary memory locations. Restricting applications to bulk writes makes efficient fault tolerance possible: because an RDD can use lineage to recover a lost partition, there is essentially no checkpointing overhead. A failure only requires recomputing the lost RDD partitions, which can be done in parallel on different nodes, without rolling back the entire program.
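The idea of lineage-based recovery can be sketched with a toy example (this is not Spark's implementation or API, just an illustration): each dataset remembers its parent and the coarse-grained transformation that produced it, so a lost partition can be recomputed independently of the others.

```python
# Toy sketch of lineage-based partition recovery (not Spark's API).
class ToyRDD:
    def __init__(self, partitions, parent=None, transform=None):
        self.partitions = partitions   # list of lists, one per partition
        self.parent = parent           # lineage: the upstream dataset
        self.transform = transform     # lineage: how each partition was derived

    def map(self, f):
        # Bulk ("coarse-grained") write: the same transformation applies to
        # every partition, so it can be recorded once in the lineage.
        child = [[f(x) for x in p] for p in self.partitions]
        return ToyRDD(child, parent=self, transform=lambda p: [f(x) for x in p])

    def recover_partition(self, i):
        # Recompute only the lost partition from the parent's copy,
        # without touching other partitions or any checkpoint.
        return self.transform(self.parent.partitions[i])

base = ToyRDD([[1, 2], [3, 4]])
doubled = base.map(lambda x: x * 2)
doubled.partitions[1] = None                          # simulate losing a partition
doubled.partitions[1] = doubled.recover_partition(1)  # recompute it from lineage
print(doubled.partitions)                             # [[2, 4], [6, 8]]
```

Note that recovery never rewrites the parent's data; it only replays the recorded transformation, which is exactly why restricting writes to bulk operations keeps the lineage small.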
The RDD model has two further advantages over DSM. First, for bulk operations on an RDD, the runtime dispatches tasks based on where the data resides, improving performance. Second, for scan-type operations, if memory is insufficient to cache the entire RDD, it can be partially cached, spilling the partitions that do not fit in memory to disk.
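Partial caching can be illustrated with a small sketch (a simplification, not Spark's internals, and the memory budget here is a made-up constant): partitions that fit in a bounded "memory" store stay there, the overflow is spilled to a slower "disk" store, and reads check memory first.

```python
# Toy illustration of partial caching with spill-to-disk (not Spark internals).
MEMORY_BUDGET = 2  # hypothetical limit: at most 2 partitions fit in memory

memory_store, disk_store = {}, {}
partitions = {0: [1, 2], 1: [3, 4], 2: [5, 6]}  # 3 partitions, only 2 fit

for pid, data in partitions.items():
    if len(memory_store) < MEMORY_BUDGET:
        memory_store[pid] = data
    else:
        disk_store[pid] = data  # spill the partitions that do not fit

def read_partition(pid):
    # A scan-type operation still sees every partition; only the access
    # cost differs between the in-memory and on-disk copies.
    return memory_store.get(pid, disk_store.get(pid))

print([read_partition(i) for i in range(3)])  # [[1, 2], [3, 4], [5, 6]]
```

The point is that a scan degrades gracefully: it slows down for the spilled partitions instead of failing when the dataset exceeds memory.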
The RDD also supports both coarse-grained and fine-grained read operations. Many operations on an RDD, such as count and collect, are bulk reads that scan the entire data set; their tasks can be assigned to the nodes closest to the data. In addition, the RDD supports fine-grained reads, such as key lookups on a hash- or range-partitioned RDD.
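The two read granularities can be contrasted with a toy hash-partitioned dataset (again an illustration, not Spark's partitioner): a bulk count scans every partition, while a key lookup only needs the single partition the key hashes to.

```python
# Toy contrast of bulk reads vs fine-grained key lookup (not Spark's API).
NUM_PARTITIONS = 4
pairs = [("a", 1), ("b", 2), ("c", 3), ("d", 4)]

# Hash-partition the key/value pairs, as a hash partitioner would.
partitions = [[] for _ in range(NUM_PARTITIONS)]
for key, value in pairs:
    partitions[hash(key) % NUM_PARTITIONS].append((key, value))

def count():
    # Bulk read: every partition is scanned (in Spark, in parallel).
    return sum(len(p) for p in partitions)

def lookup(key):
    # Fine-grained read: only the partition owning the key is scanned.
    target = partitions[hash(key) % NUM_PARTITIONS]
    return [v for k, v in target if k == key]

print(count())      # 4
print(lookup("c"))  # [3]
```

Because the partitioning function is known, a lookup can skip the other partitions entirely, which is what makes fine-grained reads cheap on a hash- or range-partitioned RDD.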