Apache Flink Fault Tolerance Source Analysis (iv)


In the previous article we explored the role of ZooKeeper in Flink fault tolerance: storing and recovering completed checkpoints, and backing the checkpoint ID counter.

This article discusses a special kind of checkpoint that Flink calls a savepoint.

Because a savepoint is just a special checkpoint, there is not much dedicated code for it in Flink. But as a feature, it is worth introducing in its own right.

Checkpoint vs Savepoint

Programs written with the DataStream API can resume execution from a savepoint. Savepoints let you update your program while ensuring that the Flink cluster does not lose any state.

A savepoint is a manually triggered checkpoint: it snapshots the running application and saves the snapshot to a persistent store (the state backend). Savepoints rely on the regular checkpointing mechanism: during program execution, Flink periodically snapshots state on the worker nodes and produces checkpoints. Recovery needs only the latest completed checkpoint, so once a new checkpoint completes, older ones can be safely discarded.

A savepoint is similar to a periodic checkpoint, with two differences:

    • It is triggered by the user
    • It does not automatically expire when a newer checkpoint completes

The figure (omitted here) illustrates the difference between the two. In the example, job 0xA312Bc has produced checkpoints c1, c2, c3, and c4. The periodic checkpoints c1 and c3 have already been discarded, and c4 is the latest checkpoint. c2 is special: its state is associated with savepoint s1, which was triggered by the user and does not expire automatically (in the diagram, c1 and c3 expire automatically once a newer checkpoint is generated).

It is important to note that s1 is just a pointer to checkpoint c2. That means the actual state is not copied into the savepoint; instead, the savepoint references the state of the associated checkpoint.
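The pointer relationship can be sketched as follows. This is a simplified model, not Flink's actual classes; `Checkpoint`, `Savepoint`, and the map of completed checkpoints are all hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;

public class SavepointPointerSketch {
    // Stand-in for a completed checkpoint holding some state.
    record Checkpoint(long id, String state) {}

    // A savepoint stores only the ID of the checkpoint it points to;
    // the state itself is not copied.
    record Savepoint(long checkpointId) {}

    public static void main(String[] args) {
        Map<Long, Checkpoint> completed = new HashMap<>();
        completed.put(2L, new Checkpoint(2L, "state-of-c2"));

        Savepoint s1 = new Savepoint(2L);

        // Resolving the savepoint dereferences the pointer.
        Checkpoint resolved = completed.get(s1.checkpointId());
        System.out.println(resolved.state());
    }
}
```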

Trigger mechanism of the savepoint

Above, we noted that one notable difference between a savepoint and a checkpoint is that a savepoint is triggered by the user. So how does the user trigger it? The answer is the command-line client provided by Flink.
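For reference, the trigger looks roughly like this from the command line (the job ID and savepoint path are placeholders; exact flags may vary between Flink versions):

```shell
# Trigger a savepoint for a running job
bin/flink savepoint <jobId>

# Resume a job from a savepoint
bin/flink run -s <pathToSavepoint> your-job.jar

# Dispose of a savepoint that is no longer needed
bin/flink savepoint -d <pathToSavepoint>
```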

Flink has a separate client module, flink-clients. The triggering code lives in the class CliFrontend under that module:

org.apache.flink.client.CliFrontend

The code is in the triggerSavepoint method:

ActorGateway jobManager = getJobManagerGateway(options);

logAndSysout("Triggering savepoint for job " + jobId + ".");

Future<Object> response = jobManager.ask(
        new TriggerSavepoint(jobId),
        new FiniteDuration(1, TimeUnit.HOURS));

Based on Akka's actor message-driven mechanism, the client sends a TriggerSavepoint message to the JobManager, driving the JobManager to respond to the savepoint trigger request.

Flink defines a series of messages that interact with the client:

org.apache.flink.runtime.messages.JobManagerMessages

/**
 * Triggers a savepoint for the specified job.
 *
 * This is not a subtype of [[AbstractCheckpointMessage]], because it is a
 * control-flow message, which is *not* part of the checkpointing mechanism
 * of triggering and acknowledging checkpoints.
 *
 * @param jobId The JobID of the job to trigger the savepoint for.
 */
case class TriggerSavepoint(jobId: JobID) extends RequiresLeaderSessionID

/**
 * Response after a successful savepoint trigger containing the savepoint path.
 *
 * @param jobId The job ID for which the savepoint was triggered.
 * @param savepointPath The path of the savepoint.
 */
case class TriggerSavepointSuccess(jobId: JobID, savepointPath: String)

/**
 * Response after a failed savepoint trigger containing the failure cause.
 *
 * @param jobId The job ID for which the savepoint was triggered.
 * @param cause The cause of the failure.
 */
case class TriggerSavepointFailure(jobId: JobID, cause: Throwable)

/**
 * Disposes a savepoint.
 *
 * @param savepointPath The path of the savepoint to dispose.
 */
case class DisposeSavepoint(savepointPath: String) extends RequiresLeaderSessionID

/** Response after a successful savepoint dispose. */
case object DisposeSavepointSuccess

/**
 * Response after a failed savepoint dispose containing the failure cause.
 *
 * @param cause The cause of the failure.
 */
case class DisposeSavepointFailure(cause: Throwable)

So how does the JobManager respond to the TriggerSavepoint message?

future {
  try {
    // Do this async, because checkpoint coordinator operations can
    // contain blocking calls to the state backend or ZooKeeper.
    val savepointFuture = savepointCoordinator.triggerSavepoint(
      System.currentTimeMillis())

    savepointFuture.onComplete {
      // Success, respond with the savepoint path
      case scala.util.Success(savepointPath) =>
        senderRef ! TriggerSavepointSuccess(jobId, savepointPath)

      // Failure, respond with the cause
      case scala.util.Failure(t) =>
        senderRef ! TriggerSavepointFailure(
          jobId, new Exception("Failed to complete savepoint", t))
    }(context.dispatcher)
  } catch {
    case e: Exception =>
      senderRef ! TriggerSavepointFailure(
        jobId, new Exception("Failed to trigger savepoint", e))
  }
}(context.dispatcher)

As the code shows, it calls SavepointCoordinator#triggerSavepoint to run the savepoint-trigger logic, which returns a Future, and registers a callback on it. Once the triggered checkpoint becomes a completed checkpoint, the callback fires; on success, the JobManager replies to the client with a TriggerSavepointSuccess message.
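The promise/callback pattern used here can be sketched in plain Java with CompletableFuture. This is a simplified stand-in for the Scala Future/Promise pair in the actual code, and the printed messages are hypothetical:

```java
import java.util.concurrent.CompletableFuture;

public class TriggerCallbackSketch {
    public static void main(String[] args) {
        // Stand-in for the promise that is completed once the
        // pending checkpoint is fully acknowledged.
        CompletableFuture<String> savepointFuture = new CompletableFuture<>();

        // The JobManager registers a callback before the checkpoint completes.
        savepointFuture.whenComplete((savepointPath, failure) -> {
            if (failure == null) {
                System.out.println("TriggerSavepointSuccess: " + savepointPath);
            } else {
                System.out.println("TriggerSavepointFailure: " + failure.getMessage());
            }
        });

        // Later, when the checkpoint completes, the promise is fulfilled
        // with the savepoint path, which fires the callback above.
        savepointFuture.complete("hdfs:///flink/savepoints/savepoint-1");
    }
}
```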

The logic that triggers the savepoint is implemented in the class SavepointCoordinator. We covered CheckpointCoordinator when analyzing the checkpoint trigger mechanism; SavepointCoordinator is a subclass of CheckpointCoordinator.

In SavepointCoordinator#triggerSavepoint, the concrete trigger logic delegates to the parent class's instance method CheckpointCoordinator#triggerCheckpoint:

try {
    // All good. The future will be completed as soon as the
    // triggered checkpoint is done.
    success = triggerCheckpoint(timestamp, checkpointId);
}
finally {
    if (!success) {
        savepointPromises.remove(checkpointId);
        promise.failure(new Exception("Failed to trigger savepoint"));
    }
}

It is important to note that the result of CheckpointCoordinator#triggerCheckpoint is a PendingCheckpoint, that is, a checkpoint that has not yet completed. At this point, the savepoint has not yet established a relationship with the current checkpoint, because a PendingCheckpoint does not necessarily turn into a CompletedCheckpoint. The relationship is established only once the checkpoint becomes a completed checkpoint.

When a PendingCheckpoint becomes a CompletedCheckpoint, a callback is triggered: onFullyAcknowledgedCheckpoint. This is the moment when the savepoint establishes its relationship with the checkpoint:

@Override
protected void onFullyAcknowledgedCheckpoint(CompletedCheckpoint checkpoint) {
    // Sanity check
    Promise<String> promise = checkNotNull(savepointPromises
            .remove(checkpoint.getCheckpointID()));

    // Sanity check
    if (promise.isCompleted()) {
        throw new IllegalStateException("Savepoint promise completed");
    }

    try {
        // Save the checkpoint
        String savepointPath = savepointStore.putState(checkpoint);
        promise.success(savepointPath);
    }
    catch (Exception e) {
        LOG.warn("Failed to store savepoint.", e);
        promise.failure(e);
    }
}

It is the call

promise.success(savepointPath);

that actually completes the promise and fires the callback registered with savepointFuture.onComplete, which in turn sends the response message from the JobManager back to the client.

At the same time, the code snippet above also shows how the savepoint is related to the checkpoint: through savepointStore, which realizes the pointer mentioned earlier. savepointStore is of type StateStore, which is what we analyze next: access to savepoint state.

Savepoint State

Flink provides an interface, StateStore, to support access to savepoint state. It exposes three methods for accessing that state:

    • putState
    • getState
    • disposeState

Regardless of the final storage medium, access is based on logical paths.

Currently, there are three implementations of this interface:

    • FileSystemStateStore: state storage based on a file system
    • HeapStateStore: state storage based on Java heap memory
    • SavepointStore: state access for savepoints; an application of the decorator pattern, with the generic type parameter materialized to CompletedCheckpoint

Of the three, the first two are the real implementations that persist state on different storage media. They correspond to Flink's two savepoint storage modes: FileSystemStateStore corresponds to filesystem, and HeapStateStore corresponds to jobmanager.
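A minimal sketch of the idea — a store accessed by logical paths, with a heap-backed implementation — might look like this. The interface is a hypothetical simplification (the real StateStore also deals with serialization and error handling), and the path prefix is illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class LogicalPathStoreSketch {
    // Simplified version of the StateStore contract: access by logical path.
    interface StateStore<T> {
        String putState(T state) throws Exception;
        T getState(String logicalPath) throws Exception;
        void disposeState(String logicalPath) throws Exception;
    }

    // Heap-backed implementation, analogous in spirit to HeapStateStore.
    static class HeapStateStore<T> implements StateStore<T> {
        private final Map<String, T> store = new HashMap<>();
        private int counter = 0;

        public String putState(T state) {
            String path = "jobmanager://savepoints/" + (counter++);
            store.put(path, state);
            return path;
        }

        public T getState(String logicalPath) {
            return store.get(logicalPath);
        }

        public void disposeState(String logicalPath) {
            store.remove(logicalPath);
        }
    }

    public static void main(String[] args) throws Exception {
        StateStore<String> store = new HeapStateStore<>();
        String path = store.putState("completed-checkpoint-2");
        System.out.println(path);
        System.out.println(store.getState(path));
    }
}
```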

JobManager

This is the default implementation of the savepoint mechanism. Savepoints are stored in the JobManager's heap memory, and they are lost when the JobManager shuts down. This mode is therefore only useful if you shut down and resume the program within the same cluster. It is not recommended for production environments. Also, in this mode savepoints are not covered by the JobManager's high-availability guarantees.

The configuration is as follows:

savepoints.state.backend: jobmanager
File system

Savepoints are stored in a configured folder in the file system. They are visible to instances on every node of the cluster and allow your program to migrate between clusters.

Configuration:

savepoints.state.backend: filesystem
savepoints.state.backend.fs.dir: hdfs:///flink/savepoints

It is important to note that a savepoint is a pointer to a completed checkpoint. That means the state of a savepoint is not only the content stored under the savepoint path itself, but also includes the checkpoint data (which may live in another set of files). Therefore, if you use filesystem for savepoints but jobmanager for checkpoints, Flink cannot provide fault tolerance through that savepoint, because the checkpoint data will not be accessible after the JobManager restarts. It is best to keep the two mechanisms consistent.

Flink creates the concrete StateStore implementation from the configuration via SavepointStoreFactory#createFromConfig.
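The factory dispatch can be sketched as follows. The configuration keys echo those shown above, but this is a simplified illustration, not Flink's actual factory code:

```java
import java.util.Map;

public class SavepointFactorySketch {
    // Choose a backend from configuration, defaulting to "jobmanager".
    static String backendFromConfig(Map<String, String> config) {
        String backend = config.getOrDefault("savepoints.state.backend", "jobmanager");
        switch (backend) {
            case "jobmanager":
                return "HeapStateStore";
            case "filesystem":
                // The real factory also requires savepoints.state.backend.fs.dir here.
                return "FileSystemStateStore";
            default:
                throw new IllegalArgumentException("Unexpected backend: " + backend);
        }
    }

    public static void main(String[] args) {
        System.out.println(backendFromConfig(Map.of()));
        System.out.println(backendFromConfig(
                Map.of("savepoints.state.backend", "filesystem")));
    }
}
```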

Summary

In this article we focused on Flink's savepoints: we analyzed the relationship and differences between savepoints and checkpoints, and walked through the code that triggers a savepoint and stores its state.
