Casper Consensus Algorithm for RChain


RChain's Casper consensus algorithm is based on Vlad Zamfir's correct-by-construction (CBC) consensus protocol and on discussions between CTO Greg Meredith and other RChain members. They have also developed a simulator for Casper: https://github.com/rchain/Casper-Proof-of-Stake/tree/simulation-dev.

1. The General Estimate Safety Protocol

An estimate safety protocol requires the following:

1) A set C of possible consensus values

2) A logic L_C used to state propositions about the elements of C, each of which is either true or false

3) A category Σ representing the protocol, whose objects are protocol states and whose morphisms are protocol executions (state transitions)

4) A function ε, called the estimator, mapping protocol states to propositions in the logic

We say that a proposition P is safe for a protocol state σ if the estimator identifies P in all possible future states of σ.

Given a few reasonable constraints on the logic and the estimator, we obtain the following consensus-safety result: if two states σ1 and σ2 have a common future state, their safe propositions cannot contradict each other (if P is safe for one, ¬P cannot be safe for the other). Consensus is therefore possible as long as nodes cannot, by the estimator's lights, reach irreconcilable states. Moreover, once a node sees that a proposition is safe, it knows that the eventually published result will remain consistent with it.
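This safety notion can be sketched concretely. The toy transition system and estimator below are invented purely for illustration (they are not part of the protocol): a proposition is safe in a state exactly when the estimator asserts it there and in every reachable future state.

```python
from collections import deque

# A toy transition system: states are integers; the lists are the possible
# protocol executions out of each state. State 3 is terminal.
transitions = {0: [1, 2], 1: [3], 2: [3], 3: []}

def estimator(state):
    """Map a protocol state to the set of propositions it asserts."""
    props = set()
    if state >= 1:
        props.add("P")   # once reached, P is asserted in every future state
    return props

def is_safe(prop, state):
    """prop is safe for `state` iff the estimator asserts it in `state`
    and in every state reachable from it (breadth-first search)."""
    queue, seen = deque([state]), {state}
    while queue:
        s = queue.popleft()
        if prop not in estimator(s):
            return False
        for nxt in transitions[s]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

States 1 and 2 share the future state 3, so any propositions safe in both must agree, matching the consensus-safety result above.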

2. RChain's Estimate Safety Protocol

RChain extends the general estimate safety protocol to meet its own needs, while still preserving the desired consensus-safety result.

2.1 The Consensus Values

The set C consists of all possible block DAGs (blockDAGs). A DAG structure is used instead of a chain because some parts of the consensus protocol require blocks with multiple parent pointers.

Each block is required to carry the following:

1) pointers to parent blocks

2) political capital (PC)

3) data

4) justifications (pointers to other blocks: the blocks seen by the validator at the time the block was created)

The data a block contains depends on its type. Most relevant to users are blocks containing transaction schedules to run on the RhoVM, but most relevant to the present discussion are blocks containing "acknowledgments" of other blocks in the DAG. These blocks play an important role in the consensus protocol. Block data may also invoke "slashing conditions", punishing a validator who produced an invalid block.
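The block structure described above can be sketched as a simple record; the field names below are illustrative, not RChain's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    name: str
    creator: str                      # validator whose signature the block carries
    parents: tuple = ()               # 1) pointers to parent blocks
    political_capital: float = 0.0    # 2) PC attached by the creator
    data: tuple = ()                  # 3) transactions, or acknowledgments
    justification: tuple = ()         # 4) blocks seen at creation time

# A tiny two-block DAG rooted at a genesis block:
genesis = Block("genesis", creator="<none>")
b1 = Block("b1", creator="A", parents=(genesis,),
           political_capital=2.0, justification=(genesis,))
```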

2.2 The Logic and Propositions

Next, we need to define the logic L_C we use to talk about blockDAGs. Here it seems simplest and wisest to take propositions to be statements about blocks that are either true or false, for example "block b is in the DAG" or "block b has parent p". The statements we will ultimately care about most are of the first kind, since they underlie our "fork-choice rule". (A rigorous mathematical treatment would need a more precise description of the logic, but the intuition above is enough for us.)

2.3 The Protocol's Specification

Now we give the definite protocol specification, which mathematically we regard as a category Σ. Protocol states are tuples (a, p, h), where a ∈ {propose, acknowledge} is the intended behaviour, p ∈ ℝ is the political capital balance, and h is the history of received messages. Messages include users' requests to execute smart contracts and blocks from other validators. A message containing a block must carry the signature of the validator that created the block. The protocol's executions are "change intent", "execute" and "receive message" (together with the "do nothing" execution and arbitrary compositions of executions, as required for a category).

Before detailing how each execution updates the protocol state, we need to define the estimator ε. As mentioned earlier, the propositions the estimator is most concerned with are those about which blocks are in the DAG. ε is a fork-choice rule: among the possible competing blocks, it selects the one on which construction should continue. In proof-of-work blockchain protocols this is the head of the chain with the greatest total work; here we choose a greedy heaviest observed sub-tree (GHOST) style algorithm, which continues construction on the highest-scoring block. The score of a block b, with respect to a message history h, is the sum of the weights of the blocks in b's DAG that are the latest messages from their senders.

A block's weight can be defined by the following formula (the original appears as an image; the form below is reconstructed from the worked example that follows):

w(b) = f^|ack(b)| · pca(b)

where f is a protocol parameter with 0 < f < 1, ack(b) is the set of blocks acknowledged by b, and pca(b) is the amount of political capital attached to b.

Here is a specific example:

Suppose there are three validators, A, B and C, and consider things from A's point of view, supposing she has the following consistent message history (shown as a diagram in the original document):

In the diagram, blocks are aligned in columns above their creators. Each block is labelled with its name and its amount of political capital, and arrows indicate parent-block pointers.

In addition, arrows labelled "ack" indicate that the source block contains acknowledgment data for the target block. The message history shows that the latest messages from each sender are b6 from A, b5 from B and b4 from C.

So only these blocks have non-zero scores. By the formula above, w(b6) = 3, w(b5) = 4f² and w(b4) = 2f. Traversing the whole DAG, we compute the scores:

score(b6) = 3 + 4f² + 2f, score(b5) = 4f² + 2f, score(b4) = 2f. So b6 has the highest score, and future blocks created by A must build on b6, unless she receives a message that changes the scores.

Note that if we rewind the history to just before b6 was created, then score(b5) = 4f² + 2f + 2 (the +2 coming from b1, which was a latest message at that point but is not once b6 exists), making b5 the highest-scoring block; this is why b6 was built on b5.

It is also worth noting that b3 and b4 can be thought of as the two validators each "promoting" a block, and b5 as "connecting" the two non-conflicting blocks into one, so that construction can continue without abandoning any of the acknowledged, consistent blocks.
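A numeric check of this example, with f chosen arbitrarily as 0.5 and the weight of a block taken as its attached political capital discounted by f once per acknowledged block (a form reconstructed from the example's stated values, not the published formula):

```python
f = 0.5  # arbitrary choice of the protocol parameter, 0 < f < 1

# block -> (political capital attached, number of blocks it acknowledges)
pc_and_acks = {"b6": (3.0, 0), "b5": (4.0, 2), "b4": (2.0, 1)}

# Latest messages (b6 from A, b5 from B, b4 from C) contained in each
# block's past DAG, counting the block itself: b6 builds on b5, which
# connects b3 and b4, so b6's past contains both b5 and b4.
latest_in_past = {"b6": ["b6", "b5", "b4"],
                  "b5": ["b5", "b4"],
                  "b4": ["b4"]}

def weight(b):
    pca, n_acks = pc_and_acks[b]
    return (f ** n_acks) * pca        # w(b) = f^|ack(b)| * pca(b)

def score(b):
    """Sum the weights of the latest messages in b's past DAG."""
    return sum(weight(m) for m in latest_in_past[b])

scores = {b: score(b) for b in pc_and_acks}
head = max(scores, key=scores.get)    # GHOST builds on the highest score
```

At f = 0.5 this gives score(b6) = 3 + 4f² + 2f = 5.0, the highest of the three, so GHOST selects b6.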

We now return to the protocol's executions. The "change intent" execution simply changes the intended behaviour, and the "execute" execution triggers the action currently intended. There are two possible actions:

1. Propose: create a transaction schedule from one or more unhandled smart-contract requests in the message history, then create a block containing: parent pointers determined by the GHOST fork-choice rule for the contracts involved; an amount of political capital (chosen by the validator; note that the more political capital a block carries, the heavier it is and the more likely it is to be chosen by the GHOST rule); the transaction schedule as data; and, as justifications, all the blocks derivable from the message history (one possible optimisation here is to include only the relevant blocks).

2. Acknowledge: either promote or connect. Select any number of independent blocks from among the latest messages in the message history (a single block is an option; that is a "promotion"). Independence means that neither block can be reached from the other through DAG connections, and that the data contained in the disconnected parts of their DAGs do not conflict. Then create a new block requiring the same contents as a proposal.

The effect of either action on the protocol state is to add the produced block to the message history (validators are assumed to receive all messages sent on the network immediately) and to reduce the political capital balance by the amount attached to the block. The balance increases as one's blocks are acknowledged, according to a recursive formula in which pce(b) denotes the political capital earned by block b. Note that the latest-message restriction prevents exploiting this formula to earn political capital from old blocks, and a single-acknowledgment rule also prevents rapid "political capital mining" by repeatedly acknowledging the same block.
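The recursive formula itself appears as an image in the original document. The form below is an assumption reconstructed from the surrounding text: a promotion earns f times the capital attached to and earned by the acknowledged block, so even an infinite promotion chain earns only a finite total, as a geometric series.

```python
f = 0.5  # protocol parameter, 0 < f < 1

def pce(acked, pca, earned):
    """Political capital earned by a block acknowledging the blocks in
    `acked` (assumed form: f times capital attached to and earned by each)."""
    return sum(f * (pca[b] + earned[b]) for b in acked)

# A promotion chain rooted at a genesis block holding 2 PC (disallowed in
# practice by the single-acknowledgment rule, but instructive): the total
# earned converges rather than growing without bound.
pca = {"genesis": 2.0}
earned = {"genesis": 0.0}
total, prev = 0.0, "genesis"
for i in range(1, 40):
    name = "promo" + str(i)
    pca[name] = 0.0                      # promotions attach no new PC
    earned[name] = pce([prev], pca, earned)
    total += earned[name]
    prev = name
# total approaches pca(genesis) * f / (1 - f) = 2.0 as the chain grows
```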

The final execution, receive message, is not as simple as it first appears. It does add the new message to the message history, but importantly, the message is also verified during this action.
This step is necessary to protect the integrity of the blockDAG: because this is a trustless system, every validator must independently verify every message it receives.
When a new message is received, several different things must be verified:

1) The message does not equivocate. An equivocation is a pair of messages from the same sender such that neither justifies the other; that is, neither message appears in the other's justifications (or, recursively, in the justifications of those justifications). Equivocation is a Byzantine fault, since it indicates the sender is behaving as though running two separate copies of the protocol. An honest sender's latest message includes all of the sender's past messages among its justifications.

2) If the message is a block containing transactions, all of its transactions are valid. That is, the smart contracts have not already been executed by a previous block, and applying the transactions to update the virtual-machine state succeeds without error. For example, a transaction resulting in a double spend would fail and is therefore invalid.

3) If the message is a connect-type acknowledgment, the acknowledged blocks are independent of each other and were not previously acknowledged by the same sender. The same transaction must not appear in more than one of the acknowledged blocks.

4) If the message is a slash block, it corresponds to a real violation.

5) If the message is an acknowledgment of any type, all of the acknowledged blocks are themselves valid. In other words, this condition makes validation a recursive process.

If any of the above conditions is violated, the message is invalid, and the violation is reported: a new "slash" block is created that punishes the sender of the invalid message.
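Condition 1 can be sketched as follows, under the simplifying assumptions (invented for this sketch) that messages are plain dictionaries and that a justification is a list of message ids:

```python
def justification_closure(msg_id, msgs):
    """All message ids reachable from msg_id via justification pointers."""
    index = {m["id"]: m for m in msgs}
    seen, stack = set(), list(index[msg_id]["justification"])
    while stack:
        jid = stack.pop()
        if jid not in seen:
            seen.add(jid)
            if jid in index:
                stack.extend(index[jid]["justification"])
    return seen

def equivocates(msgs):
    """True if some sender has two messages such that neither appears in
    the (recursive) justification of the other."""
    by_sender = {}
    for m in msgs:
        by_sender.setdefault(m["sender"], []).append(m)
    for ms in by_sender.values():
        for i in range(len(ms)):
            for j in range(i + 1, len(ms)):
                a, b = ms[i], ms[j]
                if (a["id"] not in justification_closure(b["id"], msgs)
                        and b["id"] not in justification_closure(a["id"], msgs)):
                    return True
    return False
```

An honest sender's messages form a chain of justifications; two parallel messages that both ignore each other are flagged.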

The final part of the protocol is the notion of finality: how do we know that a block will remain in the DAG permanently? Current blockchains use the depth of a block as a proxy for finality; technically, no block is ever final. Block depth could be used here, too, to judge whether a block remains part of the main DAG, but we can also introduce a reasonable "synchrony constraint" to get real finality: a validator (or user) considers a block final once it has been in their message history for some period of time t (say, one week). This makes finality a relative notion, depending on when messages are received, but in practice that causes little trouble when the time window is long enough. It does admit the possibility of a consensus failure in which two nodes finalise mutually exclusive blocks, but that is improbable when the window is long, and because of the incentives discussed below.
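This relative-finality rule amounts to a simple time-window check; the one-week window is the example value from the text, and times here are plain seconds for illustration:

```python
WEEK = 7 * 24 * 3600  # illustrative synchrony window t, in seconds

def is_final(first_seen_in_main_dag, now, window=WEEK):
    """A validator regards a block as final once it has sat in the main
    DAG of their message history for at least `window` seconds."""
    return (now - first_seen_in_main_dag) >= window
```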

2.4 Economic Considerations

As in other blockchains, a validator proposing a block of transactions also includes transactions paying itself fees (in REV) for performing the computation. Validators are therefore motivated to ensure that the blocks they create remain in the main DAG, since otherwise their fee transactions are lost. The GHOST fork-choice rule keeps heavier blocks in the main DAG, so a validator who attaches more political capital to the blocks they create makes it more likely that those blocks stay at the tip. Spending political capital when proposing a block is thus economically incentivised. Moreover, since political capital is built into the consensus protocol, the only way to earn it back after spending it is to acknowledge other blocks. Acknowledged blocks are the most likely to be built upon, because they score relatively higher (an unacknowledged proposal's score is lower, since f < 1 and GHOST only counts latest messages). Promoting blocks therefore benefits everyone: the promoter gains political capital for future use, and the promoted block's proposer is more likely to collect their fees. Connecting blocks is incentivised even more strongly, since it combines the benefits of all the individual promotions.

This individually rational behaviour has two benefits:

1. It automatically rotates validators. The best way to earn political capital is to have other validators spend theirs acknowledging you, so political capital (and with it the ability to propose blocks) flows between validators.

2. It reduces forks. Promotion and connection blocks work to maintain a single main DAG.

2.5 Slashing: Punishments for Invalid Blocks

A validator who produces an invalid block should be deterred from doing so again, so the punishment should include reducing their political capital. Indeed, allowing punishment to push political capital negative keeps a bad validator out of action. Reconciliation is always possible, however, since the validator can still earn political capital and, if necessary, slowly bring their balance back above zero.

The exact amounts for the various violations have not yet been worked out.

2.6 The One Free Parameter in the Protocol: f

What should the value of f be? Should f change over time, or with the number of participants in the network? These questions, too, call for further study with the simulator.

2.7 Initiating the Protocol: Where Do the First Validators and Political Capital Come From?

Since political capital can only be earned by acknowledging blocks that already carry political capital, a natural question is where the first political capital comes from. One solution is to attach some political capital to the genesis block.

All validators then need to promote at least one block in order to raise their political capital above its initial value of zero. The single-acknowledgment rule prevents this from being exploited for unlimited political capital: even if a validator creates an infinite chain of promotions starting from the genesis block, the geometric series converges to a finite value, and the same rule prevents creating multiple such chains.

A related question is who gets to be a validator. The simple answer: anyone. A user who joins the network starts with zero political capital, so they cannot propose new blocks, but they can participate in the acknowledgment process. Only users who take part in the consensus process gain political capital. This ensures that only those genuinely interested in the network can contribute future blocks, while still giving everyone a path to that position.

2.8 Bad Behaviour Technically Allowed in the Present Protocol Specification

1. Proposing a block and then immediately promoting it yourself. This pre-empts others from acknowledging your proposed block, reducing the political capital your competitors can earn from it. It does lower the score of the branch you are building (since the score only counts latest messages, and the promotion block's weight is lower than the original's because f < 1), so your block may lose out under the fork-choice rule; it is therefore unclear whether this strategy for monopolising political capital would actually succeed.

2. Flooding the network with meaningless acknowledgments in an attempt to crash smaller nodes; for example, a script that builds an infinite chain of acknowledgment blocks on the genesis block, much like a DDoS attack. As noted above, the system itself does not reward such an attack, but someone with outside reasons to want RChain taken down might conceivably try it.

A simple (if inconvenient) defence would be to require validators to complete a CAPTCHA before issuing acknowledgments. Another option is a built-in firewall that automatically screens out other validators when an attack is detected.

2.9 Future Optimizations

One possible solution for resolving conflicting blocks is to identify, at the time blocks are proposed, the shards on which they are independent.

Other Notes

1. What does a validator do after leaving for a few months and then coming back? How do they re-establish trust with a network they have not compromised?

Political capital is earned through active participation in the consensus process and is therefore a proxy for trust. This question is a good argument for giving political capital a "half-life": when a validator leaves, they must work to regain their former standing. How long that half-life should be is still under discussion.

2. Do we have a way to resolve the prisoner's dilemma?

Collecting transaction fees, rather than validating every shard that shares cross-shard state, is a prisoner's dilemma. As an indefinitely repeated game, it has an optimal strategy of cooperating by default and even forgiving a betrayal before withdrawing trust. We simplify this game: trusted namespaces are handled well, and untrusted namespaces are isolated from the rest of the network. We will also have some validators working across namespaces, to make sure the whole network is not spoiled by a few bad actors.

3. Is it possible to manipulate the consensus protocol to get free storage? That is, since the consensus history must be stored permanently as evidence, could a client smuggle information into it and use it without paying the corresponding storage costs?

This can be avoided by a suitable choice of the data format in acknowledgment blocks.

This article was translated from: https://rchain.atlassian.net/wiki/spaces/CORE/pages/92536846/Casper+for+RChain

My translation ability is limited, as is my own understanding of blockchain. If you find any errors, please contact me promptly so that we can improve this together. Thank you!

Related articles:

1. https://medium.com/rchain-cooperative/a-visualization-for-the-future-of-blockchain-consensus-b6710b2f50d6

2. https://github.com/ethereum/research/blob/master/papers/cbc-consensus/AbstractCBC.pdf

3. https://github.com/ethereum/research/blob/master/papers/CasperTFG/CasperTFG.pdf


Zen Yuhai

Please credit the source when reprinting. Thank you!

