The RDD (Resilient Distributed Dataset) is Spark's most basic and fundamental data abstraction. It provides the fault tolerance of data-flow models such as MapReduce while allowing developers to perform in-memory computations on large clusters.
To implement fault tolerance efficiently, the RDD (see http://www.cnblogs.com/zlslch/p/5718799.html) offers a highly restricted form of shared memory: an RDD is read-only and can only be created through bulk operations on other RDDs.
RDDs support only coarse-grained transformations, which restricts the programming model.
Still, RDDs work well for many applications, especially data-parallel batch analytics such as data mining, machine learning, and graph algorithms, because these programs typically apply the same operation to many records.
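The coarse-grained model described above can be sketched in plain Scala without a Spark dependency: each transformation applies one operation in bulk to every record and produces a new, immutable dataset, while the lineage (the chain of transformations that produced it) is what makes cheap fault recovery possible. The `Dataset` class and its `lineage` field below are illustrative simplifications, not Spark's actual API.

```scala
// A minimal model of RDD-style coarse-grained transformations.
// The parent dataset is never mutated; every transformation
// returns a new dataset plus the lineage that produced it.
object LineageDemo {
  // A "dataset": an immutable sequence of records plus the names
  // of the transformations applied so far (its lineage).
  final case class Dataset[A](records: Seq[A], lineage: List[String]) {
    // Coarse-grained: f is applied to every record, not to individual ones.
    def map[B](name: String)(f: A => B): Dataset[B] =
      Dataset(records.map(f), lineage :+ name)
    def filter(name: String)(p: A => Boolean): Dataset[A] =
      Dataset(records.filter(p), lineage :+ name)
  }

  def main(args: Array[String]): Unit = {
    val base    = Dataset(Seq(1, 2, 3, 4, 5), List("parallelize"))
    val derived = base.map("square")(x => x * x).filter("even")(_ % 2 == 0)
    println(base.records)     // parent is unchanged: List(1, 2, 3, 4, 5)
    println(derived.records)  // List(4, 16)
    println(derived.lineage)  // List(parallelize, square, even)
  }
}
```

Because each step is a deterministic bulk operation, a lost partition could be recomputed by replaying the lineage on the parent data rather than by checkpointing every record, which is the essence of the RDD fault-tolerance design.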
RDDs are not suitable for applications that make asynchronous, fine-grained updates to shared state, such as a parallel web crawler.
Accordingly, Spark's goal is to provide an efficient programming model for most analytical applications, leaving other kinds of applications to specialized systems.