Caching RDDs
One of the reasons Spark is fast is its ability to persist (or cache) a dataset in memory across operations. When an RDD is persisted, each node stores the partitions it has computed in memory and reuses them in other actions on that dataset (or datasets derived from it). This makes subsequent actions much faster (often 10x faster). Persistence and caching of RDDs is one of Spark's most important features; it is fair to say that caching is the key to Spark's support for iterative algorithms and fast interactive queries.
An RDD can be marked for persistence with the persist() or cache() method; once an action triggers its computation, the RDD is kept in memory on the compute nodes and reused. In fact, cache() is simply a shorthand for persist() with the default storage level, as their definitions show:
/** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)

/** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
def cache(): this.type = persist()
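As a short illustration, here is a minimal sketch of marking an RDD as cached and then triggering persistence with an action. It assumes a local Spark deployment; the application name and the sample data are made up for the example.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CacheExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("cache-example").setMaster("local[*]"))

    // Build an RDD and mark it for caching. Nothing is stored yet --
    // persistence is lazy and happens on the first action.
    val words = sc.parallelize(Seq("spark", "cache", "rdd", "spark"))
      .map(_.toUpperCase)
      .cache() // equivalent to persist(StorageLevel.MEMORY_ONLY)

    println(words.count())            // first action: computes and caches the partitions
    println(words.distinct().count()) // later actions reuse the cached partitions

    sc.stop()
  }
}
```

Note that cache() returns the RDD itself, so it can be chained directly onto a transformation as above.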
In Figure 4, assume the job RDD0→RDD1→RDD2 is run first, and that RDD1 is cached by the time it completes. In the subsequent job RDD0→RDD1→RDD3, the RDD0→RDD1 transformation is not repeated, because RDD1 is already cached in the system; the job only needs to compute RDD1→RDD3, which greatly improves its speed.
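The two jobs in Figure 4 can be sketched as follows. This assumes an existing SparkContext `sc`; the input path and the transformations are placeholders chosen for illustration.

```scala
// Mirrors Figure 4: RDD0 -> RDD1 is computed once, then shared by two jobs.
val rdd0 = sc.textFile("input.txt")          // hypothetical input path
val rdd1 = rdd0.map(_.toLowerCase).cache()   // mark RDD1 for caching

val rdd2 = rdd1.filter(_.contains("spark"))
rdd2.count() // job 1: computes RDD0 -> RDD1, caches RDD1, then RDD1 -> RDD2

val rdd3 = rdd1.map(_.length)
rdd3.count() // job 2: reads RDD1 from the cache; only RDD1 -> RDD3 is computed
```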
The cache may be lost, or data held in memory may be evicted when memory runs short. The RDD fault-tolerance mechanism guarantees that the computation still completes correctly even if the cache is lost: the missing data is recomputed from the RDD's lineage of transformations. Because the partitions of an RDD are independent of one another, only the lost partitions need to be recomputed, not all of them.
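When eviction is a concern, a more conservative storage level can be chosen explicitly. The sketch below assumes an existing RDD `rdd0` and a placeholder function `expensiveTransform`:

```scala
import org.apache.spark.storage.StorageLevel

// MEMORY_AND_DISK spills partitions to local disk instead of dropping them
// when memory is insufficient.
val rdd1 = rdd0.map(expensiveTransform).persist(StorageLevel.MEMORY_AND_DISK)

// Even with plain MEMORY_ONLY, losing a cached partition is not fatal:
// Spark keeps the lineage (here, the map over rdd0) and recomputes only
// the missing partitions the next time an action touches them.

// Release the cached data explicitly once it is no longer needed.
rdd1.unpersist()
```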