Reading Papers about Distributed Replication


A. Practical Byzantine Fault Tolerance


1. What's its checkpoint?

The paper refers to the states produced by the execution of these requests as checkpoints, and a checkpoint with a proof is a stable checkpoint. When the replication code invokes the make_checkpoint upcall, snfsd sets all the copy-on-write bits and creates a (volatile) checkpoint record containing the current sequence number, a list of blocks, and the digest of the current state. snfsd computes the digest of a checkpoint state as part of the make_checkpoint upcall. Although checkpoints are only taken occasionally, it is important to compute the state digest incrementally because the state may be large. We can also avoid sending the entire checkpoint by partitioning the state and stamping each partition with the sequence number of the last request that modified it: to bring a replica up to date, it is only necessary to send it the partitions where it is out of date, rather than the whole checkpoint. A request that has executed tentatively may abort if there is a view change and it is replaced by a null request; in that case the replica reverts its state to the last stable checkpoint in the new-view message or to its last checkpointed state (whichever has the higher sequence number).
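To make the partitioning idea concrete, here is a minimal Python sketch (my own illustration, not the paper's implementation; PBFT computes digests incrementally with the AdHash construction rather than per-partition SHA-256). Each partition carries the sequence number of the last request that modified it, per-partition digests are cached so the state digest stays cheap to maintain, and state transfer ships only the stale partitions:

```python
import hashlib

# Sketch of PBFT-style partitioned checkpoint state (illustrative only).

class PartitionedState:
    def __init__(self, num_partitions: int):
        self.data = [b""] * num_partitions
        self.last_mod_seq = [0] * num_partitions
        # Cache per-partition digests so only modified partitions are rehashed.
        self.part_digest = [hashlib.sha256(b"").digest()] * num_partitions

    def apply(self, partition: int, data: bytes, seq: int) -> None:
        # Stamp the partition with the sequence number of this request.
        self.data[partition] = data
        self.last_mod_seq[partition] = seq
        self.part_digest[partition] = hashlib.sha256(data).digest()

    def digest(self) -> str:
        # Checkpoint digest built from cached partition digests; unchanged
        # partitions contribute without being rehashed.
        h = hashlib.sha256()
        for d in self.part_digest:
            h.update(d)
        return h.hexdigest()

    def stale_partitions(self, peer_checkpoint_seq: int) -> list:
        # To bring a replica up to date, send only the partitions modified
        # after its last checkpoint, not the entire checkpoint.
        return [i for i, s in enumerate(self.last_mod_seq)
                if s > peer_checkpoint_seq]
```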


2. About its advantages and limitations


Advantages:
(A) To tolerate Byzantine faults, the paper proposes an algorithm that works in an asynchronous system like the Internet. Previous systems, such as Rampart and SecureRing, rely on the synchrony assumption for correctness, which is dangerous in the presence of malicious attacks.
(B) It also uses an efficient authentication scheme based on message authentication codes (MACs) during normal operation; public-key cryptography, which was cited as the major latency and throughput bottleneck in Rampart, is used only when there are faults (see the MAC sketch after this list).
(C) The paper uses the approach to implement a real service: a Byzantine-fault-tolerant distributed file system that supports the NFS protocol. The system is only 3% slower than the standard NFS daemon in the Digital Unix kernel during normal-case operation.
(D) It provides experimental results that quantify the cost of the replication technique.
(E) Applying the read-only optimization to lookup improves the performance of BFS significantly and reduces its overhead relative to BFS-NR to 20%.
(F) In Rampart and SecureRing, if the primary has actually failed, the group is unable to process client requests until the delay has expired; this algorithm is not vulnerable to that problem because it never needs to exclude replicas from the group.
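To illustrate point (B): instead of signing a message once with a private key, the sender computes one MAC per receiving replica using the pairwise session key it shares with that replica, and each replica verifies only its own entry. A minimal sketch (key distribution is out of scope; the function names and keys here are hypothetical):

```python
import hashlib
import hmac

# Sketch of a PBFT-style authenticator: a vector of MACs, one per replica,
# replacing a single public-key signature on the message.

def make_authenticator(message: bytes, session_keys: dict) -> dict:
    # session_keys maps replica id -> pairwise symmetric session key.
    return {rid: hmac.new(key, message, hashlib.sha256).digest()
            for rid, key in session_keys.items()}

def verify_my_mac(message: bytes, my_id: int, my_key: bytes,
                  authenticator: dict) -> bool:
    # Each replica checks only its own entry; computing a MAC is much
    # cheaper than producing or verifying a signature.
    expected = hmac.new(my_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(authenticator.get(my_id, b""), expected)

# Usage: the sender shares key k_i with each replica i.
keys = {0: b"k0-secret", 1: b"k1-secret", 2: b"k2-secret", 3: b"k3-secret"}
auth = make_authenticator(b"PRE-PREPARE,v=0,n=1", keys)
assert verify_my_mac(b"PRE-PREPARE,v=0,n=1", 2, b"k2-secret", auth)
```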

Limitations:
(A) The algorithm does not address the problem of fault-tolerant privacy: a faulty replica may leak information to an attacker. The authors plan to investigate secret-sharing schemes to solve this problem in the future.
(B) The paper assumes that a client waits for one request to complete before sending the next one, but clients could be allowed to make asynchronous requests while still preserving ordering constraints on them.
(C) The replication library introduces overhead due to extra computation and communication, such as executing cryptographic operations and an extra message round-trip.
(D) There is still much work to do to improve the system. One problem of special interest is reducing the amount of resources the algorithm requires: the number of replicas can be implicitly reduced by using f replicas as witnesses that are involved in the protocol only when some full replica fails, and the authors believe the number of copies of the state can be reduced to f + 1, though the details remain to be worked out.
(E) The approach cannot mask a software error that occurs at all replicas; however, it can mask errors that occur independently at different replicas, including nondeterministic software errors.


3. Others about the paper


It introduces the three-phase protocol (pre-prepare, prepare, and commit) used in normal-case operation. The protocol aims to totally order requests. Here is my question: does the three-phase protocol itself tolerate Byzantine faults, or does it merely totally order the requests?
Another question is about view changes: how is the new primary selected? The paper says that when the primary p of view v+1 receives 2f valid view-change messages for view v+1 from other replicas, it multicasts a NEW-VIEW message to all other replicas. Is this related to the above question?
My third question is that I still don't understand the difference between how the algorithm behaves in a synchronous environment versus an asynchronous one.
Maybe I couldn't rebuild the system just from the paper, but there are some approaches worth learning from. For example, regarding checkpoints: we could compute the state digest incrementally, and we could avoid sending the entire checkpoint by partitioning the state.
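On the second question: PBFT chooses the new primary deterministically, as the replica with id p = v mod |R| for view v, so no extra election is needed; the 2f view-change messages only trigger the NEW-VIEW multicast. A minimal sketch of that rule (message handling is simplified and the names are mine):

```python
# Sketch of PBFT primary selection during a view change. The rule
# p = v mod |R| is from the paper; everything else is simplified.

class Replica:
    def __init__(self, my_id: int, n: int, f: int):
        self.my_id = my_id          # this replica's id in [0, n)
        self.n = n                  # total number of replicas (n = 3f + 1)
        self.f = f                  # maximum number of faulty replicas
        self.view_changes = {}      # view -> set of ids that sent VIEW-CHANGE

    def primary_of(self, view: int) -> int:
        # Deterministic: every replica computes the same primary for a view.
        return view % self.n

    def on_view_change(self, view: int, sender: int) -> None:
        self.view_changes.setdefault(view, set()).add(sender)
        # The primary of the new view waits for 2f VIEW-CHANGE messages
        # from *other* replicas before multicasting NEW-VIEW.
        others = self.view_changes[view] - {self.my_id}
        if self.primary_of(view) == self.my_id and len(others) >= 2 * self.f:
            self.multicast_new_view(view)

    def multicast_new_view(self, view: int) -> None:
        print(f"replica {self.my_id}: NEW-VIEW for view {view}")
```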



B. Paxos Made Practical


1. What's its checkpoint?


Since view numbers are monotonically increasing, the combination of view-ID and timestamp, which the paper calls a viewstamp, determines the execution order of all requests over time. When the primary receives a request from a client, it sends a message to the backups, and that message carries a committed field: the field specifies a viewstamp below which the server has executed all requests and sent their results back to clients. These committed operations never need to be rolled back and can therefore be executed at the backups. When the primary's reply to the client gets lost and the primary subsequently fails, the cohort can re-execute the request after a reboot. In short, the checkpoint is used to record which requests have been executed, so that after a failure we don't have to re-execute from the beginning. My question: does the checkpoint record the whole request or a single operation of the request?
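Since a viewstamp is just the pair (view-ID, timestamp) ordered lexicographically, the ordering and the committed rule can be sketched as follows (the names and example values are mine):

```python
from dataclasses import dataclass

# Sketch of viewstamp ordering: view-IDs increase monotonically, and within
# a view the primary assigns increasing timestamps, so comparing
# (view_id, timestamp) lexicographically totally orders all requests.

@dataclass(order=True, frozen=True)
class Viewstamp:
    view_id: int
    timestamp: int

# The `committed` field in the primary's message: every operation strictly
# below this viewstamp has executed and replied, so it can never roll back.
committed = Viewstamp(view_id=3, timestamp=17)

def backup_may_execute(op: Viewstamp) -> bool:
    return op < committed

assert backup_may_execute(Viewstamp(2, 99))       # earlier view: committed
assert not backup_may_execute(Viewstamp(3, 17))   # not yet committed
```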


2. About its advantages and limitations


Advantages:
(A) I think it builds a clear view-change protocol: it considers both a crashed cohort that fails to respond to messages and a new cohort that may wish to join the system, and, like Paxos, it is a multi-step process that involves first proposing a new view-ID and then proposing the new view.
(B) The primary can temporarily respond to read-only requests without involving the backups if a majority of backups promise not to form a new view for 60 seconds, which makes the replication protocol more efficient (see the lease sketch after this list).
(C) It employs a third machine, called a witness, that ordinarily sits idle without executing requests but can participate in the consensus protocol to allow one of the other two replicas to form a view after a failure or network partition.
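To make (B) concrete, here is a minimal sketch of such a read lease (the 60-second figure is from the paper; the structure and names are my own simplification): the primary answers reads alone only while it holds unexpired promises from a majority of the cohort.

```python
import time

# Sketch of a read lease: the primary serves reads locally while a majority
# of backups have promised not to form a new view for LEASE_SECONDS.

LEASE_SECONDS = 60.0

class Primary:
    def __init__(self, num_backups: int):
        self.num_backups = num_backups
        self.promises = {}  # backup id -> time the promise was granted

    def on_promise(self, backup_id: int) -> None:
        self.promises[backup_id] = time.monotonic()

    def holds_lease(self) -> bool:
        now = time.monotonic()
        live = sum(1 for t in self.promises.values()
                   if now - t < LEASE_SECONDS)
        # Majority of the full cohort (backups plus the primary itself).
        return live + 1 > (self.num_backups + 1) / 2

    def handle_read(self, query):
        if self.holds_lease():
            return self.read_local(query)   # safe: no new view can form yet
        raise RuntimeError("lease expired: must involve the backups")

    def read_local(self, query):
        return f"result for {query!r}"
```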
Limitations:
(A) In Practical Byzantine Fault Tolerance, view changes are used only to select a new primary, never to select a different set of replicas to form the new view, whereas Paxos Made Practical uses view changes to form the new view itself. So I think the latter could learn from the former to reduce overhead.
(B) I believe it realizes consensus and view changes, but it doesn't show a performance evaluation.


3. Others about the paper


In fact, the paper aims to fix two limitations of viewstamped replication: (i) it doesn't show how to replicate a simple system, and (ii) it assumes that the set of possible cohorts is fixed over time. This paper addresses both, making Paxos practical and easy to understand.


C. Rex vs. Eve

Rex and Eve both target multi-core servers, and the Rex paper often refers to Eve, so here I put them together and let them have a trial of strength with each other.


1. What's its checkpoint?


Eve is an execute-verify architecture that executes every batch of requests concurrently, so its checkpoint should differ from that of traditional SMR protocols. To achieve efficient state comparison and fine-grained checkpointing and rollback, Eve stores the state in a copy-on-write Merkle tree, whose root is a concise representation of the entire state.
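Why does a Merkle root make state comparison cheap? Because interior nodes hash their children, two replicas agree on the entire state exactly when their root hashes match, and a divergence can be localized by walking down mismatching children. A toy sketch (illustrative only, not Eve's copy-on-write implementation):

```python
import hashlib

# Toy Merkle tree over a list of state objects. Eve's tree is copy-on-write
# and tracks per-batch modifications; this only shows the comparison idea.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Two replicas compare one hash instead of the entire state:
state_a = [b"obj0", b"obj1", b"obj2", b"obj3"]
state_b = [b"obj0", b"obj1", b"obj2-diverged", b"obj3"]
assert merkle_root(state_a) != merkle_root(state_b)
```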
Rex resorts to a general checkpointing framework to alleviate this burden. It does not have the primary take checkpoints periodically during execution: checkpointing cannot be done on a state where a request has not been processed completely, because Rex does not have sufficient information for a replica to continue processing an incomplete request when restarting from that checkpoint. When Rex decides to create a checkpoint, the primary sets a checkpoint flag so that all threads pause before taking on any new request. Once a checkpoint is available on a replica, any committed trace before the cut point of that checkpoint is no longer needed and can be garbage collected.
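A minimal sketch of that quiescence step, under my own assumptions about the threading model (Rex's actual framework differs): workers refuse new requests while the checkpoint flag is set, and the checkpointer waits for in-flight requests to drain so the snapshot never contains a half-processed request.

```python
import threading

# Sketch of Rex-style quiescence before a checkpoint (illustrative).

checkpoint_flag = threading.Event()
in_flight = 0
cv = threading.Condition()

def worker_take_request(handle):
    global in_flight
    with cv:
        while checkpoint_flag.is_set():     # pause before any *new* request
            cv.wait()
        in_flight += 1
    try:
        handle()                            # run the request to completion
    finally:
        with cv:
            in_flight -= 1
            cv.notify_all()

def create_checkpoint(snapshot):
    checkpoint_flag.set()
    with cv:
        while in_flight > 0:                # wait out incomplete requests
            cv.wait()
        snapshot()                          # no half-processed request in state
        checkpoint_flag.clear()
        cv.notify_all()
```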


2. About its advantages and limitations


Eve advantages:
(A) Eve achieves a speedup of up to 6.5x compared to sequential execution with 16 execution threads.
(B) Judging from figure 8 of the paper, Eve has a good ability to recover from failures.
(C) Eve outperforms Remus by a factor of 4.7x and uses two orders of magnitude less network bandwidth, because it can ensure that the states of replicas converge without requiring the transfer of all modified state.
(D) To keep latency low while maintaining high peak throughput, Eve uses a dynamic batching scheme: the batch size decreases when demand is low (providing good latency) and increases when the system starts becoming saturated, in order to exploit as much parallelism as possible (a sketch follows this list).
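As promised above, a minimal sketch of a dynamic batching policy in this spirit (purely my illustration; the paper does not give Eve's exact controller): grow the batch when the queue builds up, shrink it when load is light.

```python
# Sketch of a dynamic batching policy in the spirit of Eve's scheme
# (illustrative; the real controller may use different signals).

MIN_BATCH, MAX_BATCH = 1, 256

def next_batch_size(current: int, queue_len: int) -> int:
    if queue_len > 2 * current:       # saturating: batch up for parallelism
        return min(current * 2, MAX_BATCH)
    if queue_len < current // 2:      # light load: small batches, low latency
        return max(current // 2, MIN_BATCH)
    return current
```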
Eve limitations:
(A) The authors do not implement the extra protection-mode optimization for their asynchronous configurations.
(B) The current implementation does not handle applications in which Java's finalize method modifies state that needs to be consistent across replicas.
(C) The current prototype only supports in-memory application state.
(D) As the workload gets lighter (the execution time per request shrinks), the overhead of Eve becomes more pronounced.
(E) What is its ability to handle concurrency faults? If a bug manifests at one replica, Eve can detect it and then fix it by rolling back and re-executing sequentially; however, if the bug manifests at two replicas simultaneously, Eve cannot detect it.
(F) According to the Rex paper, handling on-disk state is tricky in Eve's execute-verify model.

Rex advantages:
(A) Rex has at most one active consensus instance at any time, which greatly simplifies the design of Rex.
(B) Eve's correctness depends on marking the states of machines correctly, while Rex's correctness depends on capturing all the sources of nondeterminism. Which is better? Compared to Eve, it is easy for Rex to find the locks and nondeterministic functions among requests, because Rex supplies its own synchronization primitives: a program only has to replace its synchronization interfaces with Rex's primitives for Rex to see all the locks and nondeterministic functions (see the sketch below). Is this similar to LD_PRELOAD?
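As a sketch of the recording idea in (B) (my own simplification of Rex's execute-agree-follow scheme): a wrapper lock appends each acquisition as a causal event to a trace on the primary; shipping that trace lets secondaries reproduce the same lock order.

```python
import threading

# Sketch of a Rex-style recording lock. On the primary, each acquisition
# appends a causal event (seq, lock_id, thread) to a shared trace; replaying
# that order on a secondary reproduces the same nondeterministic interleaving.

class RecordingLock:
    def __init__(self, lock_id, trace, trace_guard):
        self.lock_id = lock_id
        self.inner = threading.Lock()
        self.trace = trace                 # shared list of causal events
        self.trace_guard = trace_guard     # protects the trace itself

    def acquire(self):
        self.inner.acquire()
        with self.trace_guard:
            self.trace.append((len(self.trace), self.lock_id,
                               threading.get_ident()))

    def release(self):
        self.inner.release()

# Usage: replace the program's locks with RecordingLock, run requests
# concurrently on the primary, then ship `trace` to the secondaries.
trace, guard = [], threading.Lock()
lk = RecordingLock(0, trace, guard)
lk.acquire(); lk.release()
print(trace)    # e.g. [(0, 0, 139872...)]
```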
Rex limitations:
(A) Rex introduces overhead both in the primary's execution, to record the causal order, and in the secondary replicas' execution, to respect that order.
(B) In Eve, replicas can execute independently, while in Rex they cannot, because they need to make the same nondeterministic decisions to ensure consistency.
(C) It is hard to find data races, but the Rex authors believe more and more people will come to realize the danger of data races.


3. Others about the paper


Does multithreaded execution have any relation to Byzantine faults?
Eve achieves deterministic parallelism with a mixer, which partitions the set of requests into non-conflicting batches so that Eve can execute those batches concurrently without conflicts. Rex, by contrast, follows an execute-agree-follow model: at the beginning it lets the primary freely execute requests concurrently while recording the nondeterministic decisions into a trace; the other machines then agree on the trace to reach consensus, and the secondary replicas execute the requests concurrently according to the trace. I think Eve and Rex both find the nondeterministic decisions and conflicts among requests and then have the replicas agree on them to ensure consensus. My question: do their approaches to capturing nondeterministic decisions have the same overhead? Which is better?
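Here is a toy mixer in that spirit (my own illustration; Eve's mixer relies on application-supplied conflict predictions and falls back to rollback when a prediction is wrong): greedily place each request into the first batch whose members touch disjoint keys.

```python
# Toy mixer: partition requests into batches whose members touch disjoint
# key sets, so each batch can execute concurrently without conflicts.

def mixer(requests):
    batches = []      # list of lists of request names
    batch_keys = []   # keys already touched by each batch
    for name, keys in requests:
        for i, used in enumerate(batch_keys):
            if used.isdisjoint(keys):   # no predicted conflict: join batch i
                batches[i].append(name)
                used |= keys
                break
        else:                           # conflicts with every batch: new one
            batches.append([name])
            batch_keys.append(set(keys))
    return batches

reqs = [("r1", {"x"}), ("r2", {"y"}), ("r3", {"x", "z"}), ("r4", {"w"})]
print(mixer(reqs))   # [['r1', 'r2', 'r4'], ['r3']]
```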
About Rex: since we could remove unnecessary causal edges to reduce overhead, suppose a client proposes three requests and, during the execute stage, we find that they are independent; then there is no causal edge among them, and they can be executed concurrently. My thought is: could we know they are independent without executing them, as a kind of preprocessing, so that we could omit the execute stage?








