Once a leader has been elected, it begins servicing client requests. Each client request contains a command to be executed by the replicated state machines. The leader appends the command to its log as a new entry, then issues AppendEntries RPCs in parallel to each of the other servers to replicate the entry. When the entry has been safely replicated (as described below), the leader applies the entry to its state machine and returns the result of that execution to the client. If followers crash or run slowly, or if network packets are lost, the leader retries AppendEntries RPCs indefinitely (even after it has responded to the client) until all followers eventually store all log entries.
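The request path above can be sketched as a toy, single-threaded Python model (real Raft issues the RPCs in parallel and retries failed ones indefinitely; all names here are illustrative, not from any real library):

```python
# Toy sketch of the leader's request path: append locally, replicate to
# followers, and apply once a majority stores the entry.

class Server:
    def __init__(self, term=1):
        self.current_term = term
        self.log = []
        self.state = {}          # trivially simple key/value state machine

    def apply(self, command):
        key, value = command     # command modeled as a (key, value) assignment
        self.state[key] = value
        return value

def append_entries_rpc(follower, entry):
    """Deliver one entry to a follower; returns True on success."""
    follower.log.append(entry)
    return True

def handle_client_request(leader, followers, command):
    # 1. Append the command to the leader's log as a new entry.
    entry = {"term": leader.current_term, "command": command}
    leader.log.append(entry)

    # 2. Issue AppendEntries RPCs to replicate the entry on the followers.
    acks = 1                     # the leader itself already stores the entry
    for f in followers:
        if append_entries_rpc(f, entry):
            acks += 1

    # 3. Once a majority stores the entry, it is committed: apply it to the
    #    state machine and return the result to the client.
    if acks > (len(followers) + 1) // 2:
        return leader.apply(entry["command"])
```

This deliberately omits crash recovery and retry; it only mirrors the happy path described in the text.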
Logs are organized as shown in Figure 6. Each log entry stores a state machine command along with the term number when the entry was received by the leader. The term numbers in log entries are used to detect inconsistencies between logs and to ensure some of the properties in Figure 3. Each log entry also has an integer index identifying its position in the log.
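As a minimal sketch (a Python model assumed for illustration, not part of the paper), an entry carries its command and term, and its index is simply its 1-based position in the log:

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    term: int       # term number when the leader received the command
    command: str    # opaque state machine command

# The log itself can be a plain list; entry i (1-based) lives at log[i - 1].
log = [LogEntry(term=1, command="x<-3"), LogEntry(term=1, command="y<-1")]

def entry_at(log, index):
    """Return the entry at a 1-based log index, or None if out of range."""
    return log[index - 1] if 1 <= index <= len(log) else None
```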
The leader decides when it is safe to apply a log entry to the state machines; such an entry is called committed. Raft guarantees that committed entries are durable and will eventually be executed by all of the available state machines. A log entry is committed once the leader that created the entry has replicated it on a majority of the servers (e.g., entry 7 in Figure 6). This also commits all preceding entries in the leader's log, including entries created by previous leaders. Section 5.4 discusses some subtleties when applying this rule after leader changes, and it also shows that this definition of commitment is safe. The leader keeps track of the highest index it knows to be committed, and it includes that index in future AppendEntries RPCs (including heartbeats) so that the other servers eventually find out. Once a follower learns that a log entry is committed, it applies the entry to its local state machine (in log order).
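The majority rule can be sketched as follows: an index is committed once a majority of servers (leader included) store an entry at that index. This toy helper deliberately omits the Section 5.4 restriction about the entry's term, and the names are illustrative:

```python
def highest_committed_index(match_index, leader_log_len):
    """match_index: highest replicated index on each follower.
    leader_log_len: the leader stores its entire log.
    Returns the largest index stored on a majority of servers."""
    indexes = sorted(match_index + [leader_log_len], reverse=True)
    majority = len(indexes) // 2 + 1
    # The majority-th largest index is stored on at least `majority` servers.
    return indexes[majority - 1]
```

For a 5-server cluster where the leader has 8 entries and the followers have replicated up to indexes 7, 5, 7, and 4, index 7 is stored on three servers and is therefore committed, matching the "entry 7" example above.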
We designed the Raft log mechanism to maintain a high level of coherency between the logs on different servers. Not only does this simplify the system's behavior and make it more predictable, but it is an important component of ensuring safety. Raft maintains the following properties, which together constitute the Log Matching Property in Figure 3:
If two entries in different logs have the same index and term, then they store the same command.
If two entries in different logs have the same index and term, then the logs are identical in all preceding entries.
The first property follows from the fact that a leader creates at most one entry with a given log index in a given term, and log entries never change their position in the log. The second property is guaranteed by a simple consistency check performed by AppendEntries. When sending an AppendEntries RPC, the leader includes the index and term of the entry in its log that immediately precedes the new entries. If the follower does not find an entry in its log with the same index and term, then it refuses the new entries. The consistency check acts as an induction step: the initial empty state of the logs satisfies the Log Matching Property, and the consistency check preserves the Log Matching Property whenever logs are extended. As a result, whenever AppendEntries returns successfully, the leader knows that the follower's log is identical to its own log up through the new entries.

During normal operation, the logs of the leader and followers stay consistent, so the AppendEntries consistency check never fails. However, leader crashes can leave the logs inconsistent (the old leader may not have fully replicated all of the entries in its log). These inconsistencies can compound over a series of leader and follower crashes. Figure 7 illustrates the ways in which followers' logs may differ from that of a new leader. A follower may be missing entries that are present on the leader, it may have extra entries that are not present on the leader, or both. Missing and extraneous entries in a log may span multiple terms.
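The follower-side consistency check mentioned above can be sketched in a few lines. Here a follower's log is modeled simply as a list of term numbers, and the parameter names (`prev_log_index`, `prev_log_term`) are illustrative:

```python
def consistency_check(follower_terms, prev_log_index, prev_log_term):
    """Accept new entries only if the follower's log contains an entry
    at prev_log_index whose term matches prev_log_term."""
    if prev_log_index == 0:
        return True                 # appending at the start of the log always matches
    if prev_log_index > len(follower_terms):
        return False                # follower is missing that entry entirely
    return follower_terms[prev_log_index - 1] == prev_log_term
```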
In Raft, the leader handles inconsistencies by forcing the followers' logs to duplicate its own. This means that conflicting entries in follower logs will be overwritten with entries from the leader's log. Section 5.4 will show that this is safe when coupled with one more restriction.
To bring a follower's log into consistency with its own, the leader must find the latest log entry where the two logs agree, delete any entries in the follower's log after that point, and send the follower all of the leader's entries after that point. All of these actions happen in response to the consistency check performed by AppendEntries RPCs. The leader maintains a nextIndex for each follower, which is the index of the next log entry the leader will send to that follower. When a leader first comes to power, it initializes all nextIndex values to the index just after the last one in its log (11 in Figure 7). If a follower's log is inconsistent with the leader's, the AppendEntries consistency check will fail in the next AppendEntries RPC. After a rejection, the leader decrements nextIndex and retries the AppendEntries RPC. Eventually nextIndex will reach a point where the leader and follower logs match. When this happens, AppendEntries will succeed, which removes any conflicting entries in the follower's log and appends entries from the leader's log (if any). Once AppendEntries succeeds, the follower's log is consistent with the leader's, and it will remain that way for the rest of the term.
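The retry loop above can be sketched as a toy synchronous function: the leader walks nextIndex backwards until the consistency check passes, then overwrites the follower's tail with its own entries. Logs are modeled as lists of (term, command) pairs, and all names are illustrative:

```python
def bring_follower_up_to_date(leader_log, follower_log):
    """Converge follower_log to leader_log via the nextIndex backoff loop."""
    next_index = len(leader_log) + 1          # just past the leader's last entry
    while True:
        prev = next_index - 1
        # Consistency check: the entry just before next_index must match.
        ok = prev == 0 or (prev <= len(follower_log)
                           and follower_log[prev - 1][0] == leader_log[prev - 1][0])
        if ok:
            # Delete any conflicting entries, then append the leader's entries.
            follower_log[prev:] = leader_log[prev:]
            return follower_log
        next_index -= 1                        # rejected: back up and retry
```

A follower with an extra entry from a stale term, e.g. `[(1,'a'), (2,'b'), (2,'x')]` against a leader log `[(1,'a'), (2,'b'), (3,'c')]`, has its tail `(2,'x')` replaced by `(3,'c')`.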
If desired, the protocol can be optimized to reduce the number of rejected AppendEntries RPCs. For example, when rejecting an AppendEntries request, the follower can include the term of the conflicting entry and the first index it stores for that term. With this information, the leader can decrement nextIndex to bypass all of the conflicting entries in that term; one AppendEntries RPC will be required for each term with conflicting entries, rather than one RPC per entry. In practice, we doubt this optimization is necessary, since failures happen infrequently and it is unlikely that there will be many inconsistent entries.
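A sketch of the hint the follower could return under this optimization (again modeling the log as a list of term numbers; names are illustrative): the term of the conflicting entry and the first index the follower stores for that term, letting the leader skip the whole term at once.

```python
def conflict_hint(follower_terms, prev_log_index):
    """On rejection, report (conflicting term, first index of that term).
    If the follower's log is too short, report its length + 1 instead."""
    if prev_log_index > len(follower_terms):
        return None, len(follower_terms) + 1
    term = follower_terms[prev_log_index - 1]
    first = prev_log_index
    while first > 1 and follower_terms[first - 2] == term:
        first -= 1                 # scan back to the first entry of that term
    return term, first
```

With a follower log of terms `[1, 2, 2, 2]` and a failed check at index 4, the hint is `(2, 2)`, so the leader can set nextIndex to 2 in one step instead of three.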
With this mechanism, a leader does not need to take any special actions to restore log consistency when it comes to power. It just begins normal operation, and the logs automatically converge in response to failures of the AppendEntries consistency check. A leader never overwrites or deletes entries in its own log (the Leader Append-Only Property in Figure 3).
This log replication mechanism exhibits the desirable consensus properties described in Section 2: Raft can accept, replicate, and apply new log entries as long as a majority of the servers are up; in the normal case a new entry can be replicated with a single round of RPCs to a majority of the cluster; and a single slow follower will not impact performance.
In Search of an Understandable Consensus Algorithm (Extended Version)