Spark Source Learning (12): Checkpoint Mechanism Analysis


Checkpoint principle:

The CacheManager source analysis article mentioned that when an RDD uses the cache mechanism, it first tries to read data from memory; if the data cannot be read there, the checkpoint mechanism is used instead. Without a checkpoint, the data would have to be recomputed from the parent RDDs, so checkpoint is a very important fault-tolerance mechanism. Checkpoint operates on an RDD chain: if the results of some intermediate RDD will be reused later, and a fault could cause that intermediate data to be lost, you can enable checkpointing for that RDD. To do so, first call SparkContext's setCheckpointDir method to configure a directory on a fault-tolerant file system such as HDFS, then call the checkpoint method on the RDD. After the job that produces the RDD finishes, a separate job is started that writes the checkpoint data to the previously configured file system, persisting it and making it highly available. When the RDD is used later, its data does not need to be recomputed if it is lost; it can be read back from its checkpoint.
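
As a concrete illustration of this flow, here is a minimal driver-side sketch (the app name and HDFS paths are placeholders) that configures a checkpoint directory, marks an RDD for checkpointing, and triggers the write with an action:

  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setAppName("checkpoint-example"))

  // Set a fault-tolerant directory (e.g. on HDFS) before calling checkpoint().
  sc.setCheckpointDir("hdfs://namenode:9000/spark/checkpoints")

  val counts = sc.textFile("hdfs://namenode:9000/input/data.txt")
    .flatMap(_.split(" "))
    .map((_, 1))
    .reduceByKey(_ + _)

  // Mark the RDD for checkpointing; nothing is written yet.
  counts.checkpoint()

  // The first action runs this job, and then a separate job recomputes the
  // RDD's partitions and writes them to the checkpoint directory.
  counts.count()

Because that separate job recomputes the RDD from scratch, it is common practice to cache() the RDD before checkpointing it, so that the second pass reads the partitions from memory instead of recomputing them.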


The difference between checkpoint and persist/cache is this: persistence simply keeps the data in the BlockManager and leaves the lineage unchanged, whereas after checkpointing the RDD no longer depends on its original parent RDDs; its only remaining parent is a CheckpointRDD, so the lineage has changed. In addition, persisted data is more likely to be lost, because disk or memory may be cleaned up, while checkpoint data is usually saved to HDFS, a highly fault-tolerant file system.
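
A quick way to observe this lineage change is RDD.toDebugString. Below is a minimal sketch, assuming a spark-shell session where sc is available and a checkpoint directory has already been set; note that the class name printed for the checkpointed parent varies by Spark version (the 1.x sources quoted below call it CheckpointRDD):

  val rdd = sc.parallelize(1 to 100).map(_ * 2).filter(_ % 4 == 0)

  rdd.cache()                 // persist: data lands in the BlockManager,
  println(rdd.toDebugString)  // but the lineage still shows the full chain

  rdd.checkpoint()
  rdd.count()                 // the action triggers the separate checkpoint job

  println(rdd.toDebugString)  // the lineage is now truncated at the checkpoint
  println(rdd.dependencies)   // a single dependency on the checkpointed parent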


The checkpoint analysis starts with RDD reading. Normally, iterator invokes the CacheManager to obtain persisted data, but when the RDD cannot read from the persisted disk or cache, it falls back to the checkpoint to read the data:

  final def iterator(split: Partition, context: TaskContext): Iterator[T] = {
    if (storageLevel != StorageLevel.NONE) {
      SparkEnv.get.cacheManager.getOrCompute(this, split, context, storageLevel)
    } else {
      computeOrReadCheckpoint(split, context)
    }
  }

computeOrReadCheckpoint first checks whether the RDD has been checkpointed. If it has, it calls the parent RDD's iterator method, because at this point the RDD has no lineage of its own and its parent is the CheckpointRDD; otherwise it recomputes the partition. Enter the computeOrReadCheckpoint method:

  private[spark] def computeOrReadCheckpoint(split: Partition, context: TaskContext): Iterator[T] =
  {
    if (isCheckpointed) firstParent[T].iterator(split, context) else compute(split, context)
  }

This is where the parent RDD's, i.e. the CheckpointRDD's, compute method is called to read the data from Hadoop:

  override def compute(split: Partition, context: TaskContext): Iterator[T] = {
    val file = new Path(checkpointPath, CheckpointRDD.splitIdToFile(split.index))
    CheckpointRDD.readFromFile(file, broadcastedConf, context)
  }

Finally, Hadoop's FileSystem class is used to read the data from HDFS:

  def readFromFile[T](
      path: Path,
      broadcastedConf: Broadcast[SerializableWritable[Configuration]],
      context: TaskContext
    ): Iterator[T] = {
    val env = SparkEnv.get
    val fs = path.getFileSystem(broadcastedConf.value.value)
    val bufferSize = env.conf.getInt("spark.buffer.size", 65536)
    // Open an input stream via Hadoop's FileSystem
    val fileInputStream = fs.open(path, bufferSize)
    // Deserialize the data
    val serializer = env.serializer.newInstance()
    val deserializeStream = serializer.deserializeStream(fileInputStream)

    // Register an on-task-completion callback to close the input stream.
    context.addTaskCompletionListener(context => deserializeStream.close())

    deserializeStream.asIterator.asInstanceOf[Iterator[T]]
  }
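
For context on how these files are written in the first place: when the separate checkpoint job runs, each task serializes its partition into one file under the checkpoint directory, named by partition index, which is exactly what CheckpointRDD.splitIdToFile resolves on the read side. The sketch below mirrors readFromFile to illustrate that write path; it is not the verbatim Spark source (for instance, the real code writes to a temporary file and renames it), and the helper name writePartitionToFile is invented for illustration:

  import scala.reflect.ClassTag

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.Path
  import org.apache.spark.{SerializableWritable, SparkEnv, TaskContext}
  import org.apache.spark.broadcast.Broadcast

  // Illustrative sketch of the checkpoint write path: open an output stream
  // on the fault-tolerant file system and serialize the partition into it.
  def writePartitionToFile[T: ClassTag](
      checkpointPath: Path,
      broadcastedConf: Broadcast[SerializableWritable[Configuration]],
      ctx: TaskContext,
      iterator: Iterator[T]): Unit = {
    val env = SparkEnv.get
    val fs = checkpointPath.getFileSystem(broadcastedConf.value.value)
    val bufferSize = env.conf.getInt("spark.buffer.size", 65536)

    // One file per partition ("part-00000", "part-00001", ...), the
    // counterpart of CheckpointRDD.splitIdToFile on the read side.
    val finalPath = new Path(checkpointPath, "part-%05d".format(ctx.partitionId))

    val fileOutputStream = fs.create(finalPath, true, bufferSize)
    val serializeStream = env.serializer.newInstance().serializeStream(fileOutputStream)
    try {
      serializeStream.writeAll(iterator)  // serialize every element of the partition
    } finally {
      serializeStream.close()             // also closes the underlying file stream
    }
  }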




