Vitalik Buterin: Minimal Slashing Conditions in Ethereum Casper


Last week, Yoichi published a blog post detailing my "minimal slashing conditions" and formally proving the safety and liveness that they offer. The conditions are a key component of a Byzantine-fault-tolerant, safe-under-asynchrony, cryptoeconomically secure consensus algorithm, and they are also at the heart of my roadmap for proof of stake. In this article, I want to explain in more detail what the algorithm is, why it matters, and how it fits into proof-of-stake research in general.

One of the key goals of the Casper research is to achieve "economic finality", which we can define semi-formally as follows:

A block B1 is economically finalized, with a cryptoeconomic security margin of $X, if a client has proof that either (i) B1 will always be part of the canonical chain, or (ii) the actors that caused B1 to be reverted are guaranteed to be penalized by an amount equal to at least $X.

Let's imagine X equals $70 million. Basically, if a block is finalized, then that block is part of the chain, and changing this fact is very, very expensive. Proof of work offers nothing like this guarantee (see note 1), so this is a feature unique to proof of stake. Our intention is to make 51% attacks extremely expensive, so that even a majority of validators working together cannot roll back finalized blocks without suffering an enormous economic loss - a loss so large that a successful attack would likely increase the price of the underlying cryptocurrency, because the market would react more strongly to the reduction in total coin supply than to the need for an emergency hard fork to correct the attack (see an in-depth overview of the underlying philosophy).

Economic finality is accomplished in Casper by requiring validators to submit deposits in order to participate, and by taking away their deposits if the protocol determines that they violated some set of rules ("slashing conditions").

A slashing condition might look like this:

If a validator sends a signed message of the form:

["PREPARE", Epoch, HASH1, Epoch_source1]

and a signed message of the form:

["PREPARE", Epoch, HASH2, Epoch_source2]

where HASH1 != HASH2 or Epoch_source1 != Epoch_source2, but the value of Epoch is the same in both messages, then that validator's deposit is slashed (i.e., deleted).

The protocol defines a set of slashing conditions, and an honest validator follows a protocol that guarantees never triggering any of them (note: we sometimes say "violate" a slashing condition as a synonym for "trigger"; think of a slashing condition as a law that you are not supposed to break). In the case above, never sending a PREPARE message twice in the same epoch is sufficient, and this is not difficult for an honest validator.
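
To make this concrete, here is a minimal sketch in Python (illustrative only, not the actual Casper code) of how one might check a pair of signed PREPARE messages against this slashing condition; the Prepare structure and its field names are assumptions made for this example.

from typing import NamedTuple

class Prepare(NamedTuple):
    validator: str       # identifier of the signing validator
    epoch: int           # the epoch this prepare is for
    hash: str            # the hash being prepared
    epoch_source: int    # the earlier epoch this prepare points back to

def violates_no_double_prepare(p1: Prepare, p2: Prepare) -> bool:
    # Slashable: same validator and same epoch, but a different hash
    # or a different source epoch.
    return (p1.validator == p2.validator
            and p1.epoch == p2.epoch
            and (p1.hash != p2.hash or p1.epoch_source != p2.epoch_source))

# Example: preparing two different hashes in epoch 5 is slashable.
assert violates_no_double_prepare(Prepare("v1", 5, "HASH1", 3),
                                  Prepare("v1", 5, "HASH2", 3))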

"PREPARE" and "COMMIT" are terms borrowed from the traditional Byzantine fault-tolerant consensus theory. Now, let's assume that they represent two different types of messages, which we'll cover in the following protocol. You can argue that the consensus agreement requires two different rounds of agreement, in which prepare represents a round of agreements, and commits represent the second round.

There is also a finality condition, which describes when a client can determine that some particular hash is finalized. This is easy to understand, so we go straight to presenting the sole finality condition in the current version of Casper:

A HASH is finalized if, for some particular epoch, there exists a set of signed messages of the form:

["COMMIT", Epoch, HASH]

and if you add up the deposit balances of the validators that created these signed messages, the total is greater than 2/3 of the total deposit balance of the currently active validator set.

As a shorthand, we can say "the hash was committed by 2/3 of validators during that epoch", or "the hash has 2/3 commits for that epoch".
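
As an illustration, here is a rough sketch (hypothetical data structures, not the actual client code) of how a client might evaluate this finality condition, given the COMMIT messages it has seen and a table of deposit balances:

from typing import NamedTuple, Dict, List

class Commit(NamedTuple):
    validator: str
    epoch: int
    hash: str

def is_finalized(target_hash: str, epoch: int,
                 commits: List[Commit],
                 deposits: Dict[str, int]) -> bool:
    # Collect the distinct validators that committed this hash in this
    # epoch, then compare their combined deposits against 2/3 of the
    # total deposits of the active validator set.
    committers = {c.validator for c in commits
                  if c.epoch == epoch and c.hash == target_hash}
    committed_stake = sum(deposits[v] for v in committers)
    return 3 * committed_stake > 2 * sum(deposits.values())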

The slashing conditions need to satisfy two properties:

Accountable safety: if two conflicting hashes get finalized, then it must be provable that at least 1/3 of validators violated some slashing condition.

Plausible liveness: unless at least 1/3 of validators violated some slashing condition, there must exist a set of messages that 2/3 of validators can send which finalizes some new hash without violating any slashing condition.

Accountable safety gives us the idea of "economic finality": if two conflicting hashes get finalized (i.e., a fork), then we have mathematical proof that a large set of validators violated some slashing condition, and we can submit this evidence to the blockchain and penalize them.

Plausible liveness basically says that "it should not be possible for the algorithm to get 'stuck' and become unable to finalize anything at all".

To illustrate what these two properties mean, we can consider two toy algorithms, one of which satisfies safety but not liveness, and one of which satisfies liveness but not safety.

Algorithm 1: every validator gets exactly one opportunity to send a message of the form ["COMMIT", HASH]. If 2/3 of validators send a COMMIT for the same hash, that hash is finalized. Sending two COMMIT messages violates a slashing condition.

Here, there is a clear proof that the algorithm is safe: if HASH1 and HASH2 are both finalized, then each of the two hashes must have at least 2/3 commits, so there must be an overlap of at least 1/3, and so at least 1/3 of validators get slashed. But it is not plausibly live: if 1/2 commit to A and 1/2 commit to B (something that can quite reasonably happen), then 1/6 of validators must voluntarily slash themselves in order for any hash to be finalized.

(Diagram: proof of safety)
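
The arithmetic behind both claims can be checked directly; here is a quick illustrative computation (the fractions are of the total validator set):

from fractions import Fraction

two_thirds = Fraction(2, 3)

# Safety: two finalized hashes each have at least 2/3 commits, and the two
# commit sets together cannot exceed the whole validator set (1), so:
overlap = two_thirds + two_thirds - 1
assert overlap == Fraction(1, 3)    # at least 1/3 committed twice

# Liveness failure: a 50/50 split between hashes A and B. To bring A up to
# 2/3, the shortfall must come from B-committers committing a second time:
self_slashers = two_thirds - Fraction(1, 2)
assert self_slashers == Fraction(1, 6)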

Algorithm 2: every validator gets one opportunity to send a message of the form ["COMMIT", HASH, Epoch]. If 2/3 of validators send a COMMIT for the same hash in the same epoch, that hash is finalized. Sending two COMMIT messages with different hashes in the same epoch violates a slashing condition.

This solves the problem with the previous algorithm: if one epoch ends in a 50/50 split, then we simply try again in the next epoch. But it introduces a safety flaw: two different hashes can be finalized in different epochs!
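
To see the flaw concretely, here is a toy check (illustrative structures only) showing that Algorithm 2's slashing condition never fires across different epochs, so every validator can commit HASH_A in epoch 1 and HASH_B in epoch 2, letting both hashes reach 2/3 commits and get finalized:

from typing import NamedTuple

class Commit(NamedTuple):
    validator: str
    hash: str
    epoch: int

def violates_algorithm2(c1: Commit, c2: Commit) -> bool:
    # Algorithm 2 only forbids committing different hashes in the SAME epoch.
    return (c1.validator == c2.validator
            and c1.epoch == c2.epoch
            and c1.hash != c2.hash)

# Committing different hashes in different epochs triggers nothing:
assert not violates_algorithm2(Commit("v1", "HASH_A", 1),
                               Commit("v1", "HASH_B", 2))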

It turns out that getting both safety and liveness is possible, but nontrivial: it requires four slashing conditions, plus about a thousand lines of code from Yoichi to formally prove that they actually work.

The four slashing conditions are as follows:

[COMMIT_REQ] If a validator sends a signed message of the form:

["COMMIT", Epoch, HASH]

then, unless, for some specific value Epoch_source with -1 <= Epoch_source < Epoch, messages of the form

["PREPARE", Epoch, HASH, Epoch_source]

have been signed and broadcast by 2/3 of validators, that validator's deposit is slashed.

In plain English, sending a commit requires the hash to have been prepared by 2/3 of validators.
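
Here is a sketch of how a client might verify this requirement for a given COMMIT (hypothetical structures, in the same style as the earlier sketches; deposit-weighted counting is assumed, as in the finality condition):

from typing import NamedTuple, Dict, List

class Prepare(NamedTuple):
    validator: str
    epoch: int
    hash: str
    epoch_source: int

def commit_is_justified(commit_epoch: int, commit_hash: str,
                        prepares: List[Prepare],
                        deposits: Dict[str, int]) -> bool:
    # The COMMIT is allowed if, for SOME Epoch_source in [-1, commit_epoch),
    # matching prepares were signed by validators holding more than 2/3
    # of the total deposits; otherwise the committer is slashed.
    total = sum(deposits.values())
    for source in range(-1, commit_epoch):
        preparers = {p.validator for p in prepares
                     if (p.epoch, p.hash, p.epoch_source)
                     == (commit_epoch, commit_hash, source)}
        if 3 * sum(deposits[v] for v in preparers) > 2 * total:
            return True
    return False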

[PREPARE_REQ] If a validator sends a signed message of the form:

["PREPARE", Epoch, HASH, Epoch_source]

where Epoch_source != -1, then, unless, for some specific value Epoch_source_source with -1 <= Epoch_source_source < Epoch_source, messages of the form

["PREPARE", Epoch_source, ANCESTOR_HASH, Epoch_source_source]

where ANCESTOR_HASH is the (Epoch - Epoch_source)-th degree ancestor of HASH, have been signed and broadcast by 2/3 of validators, that validator's deposit is slashed.

In plain English, if you send a prepare pointing to some particular previous epoch, then 2/3 of validators must have prepared in that epoch, and their prepares must all point to the same even earlier epoch (e.g., 2/3 of validators preparing in epoch 41 pointing to epoch 35 is fine; 2/3 preparing in epoch 41 where half point to epoch 35 and the other half point to epoch 37 is not; 5/6 of validators preparing in epoch 41 of which 4/5 point to epoch 35 is also fine, since 4/5 of 5/6 is 2/3, so you can ignore the remaining 1/6).

"Nth Order Ancestor" ("n-th degree ancestor"), in the meaning of the blockchain hash, is the ancestor's meaning, in which, for example, the Ethereum block No. 3017225 is the No. 3017240 15-order ancestor block of the Ethereum block. Note that a chunk can have only one parent, so there is only one nth-order ancestor for a specific number n.

[PREPARE_COMMIT_CONSISTENCY] If a validator sends a signed message of the form:

["COMMIT", Epoch1, HASH1]

and also a signed message of the form:

["PREPARE", Epoch2, HASH2, Epoch_source]

where Epoch_source < Epoch1 < Epoch2, then, regardless of whether or not HASH1 = HASH2, that validator is slashed.

In plain English, if you commit during some epoch, then you have clearly seen 2/3 prepares during that epoch, so any prepare you send in the future had better point to that epoch or something newer.
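
In sketch form (hypothetical structures again), the check is a pure epoch comparison, with the hashes deliberately ignored:

from typing import NamedTuple

class Commit(NamedTuple):
    validator: str
    epoch: int
    hash: str

class Prepare(NamedTuple):
    validator: str
    epoch: int
    hash: str
    epoch_source: int

def violates_prepare_commit_consistency(c: Commit, p: Prepare) -> bool:
    # Slashable: a later PREPARE points back past an epoch in which the
    # same validator already sent a COMMIT (Epoch_source < Epoch1 < Epoch2).
    return (c.validator == p.validator
            and p.epoch_source < c.epoch < p.epoch)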

[NO_DBL_PREPARE] If a validator sends a signed message of the form:

["PREPARE", Epoch, HASH1, Epoch_source1]

and a signed message of the form:

["PREPARE", Epoch, HASH2, Epoch_source2]

where HASH1 != HASH2 or Epoch_source1 != Epoch_source2, but the value of Epoch is the same in both messages, then that validator is slashed.

In plain English, you cannot prepare twice in a single epoch.

With these four slashing conditions, it can be proven that both accountable safety and plausible liveness hold.

Note that under the above rules, two different hashes can indeed be finalized - but only if both hashes are part of the same history. In fact, more and more hashes getting finalized at the end of an ever-growing chain is exactly what we want.

Left: different hashes can be finalized under these rules, but as long as they are all part of the same chain, this works as expected. Right: hashes that do not belong to the same history conflict; it can be proven that the above four slashing conditions prevent two conflicting hashes from being finalized unless at least 1/3 of validators get slashed.

Now, let's put everything together. There is a pool of validators (anyone can join freely, although there is some delay before a deposit is inducted, and any participant is free to leave and then withdraw their funds after an even longer delay), and these validators have the right to sign and send messages of the form:

["PREPARE", Epoch, HASH, Epoch_source]

["COMMIT", Epoch, HASH]

If there are enough COMMIT messages for some hash in a particular epoch, that hash is finalized. The hashes are chained together, each pointing to some previous hash, and what we want to see is that, as time goes on, an ever-growing chain emerges, with newer and newer hashes joining it and being finalized at its end. We add economic incentives for validators to send PREPARE and COMMIT messages, encouraging enough messages to be sent in a timely fashion for finalization to happen.

In general, you can take any Byzantine-fault-tolerant consensus algorithm that has the "guaranteed liveness under synchrony, guaranteed safety under asynchrony" property, such as PBFT, and transform it into a set of slashing conditions that give you accountable safety and plausible liveness. The conditions above were inspired by a combination of PBFT and Tendermint, but other starting points may produce different results.

Note that plausible liveness and actual liveness are not the same thing. Plausible liveness means that in theory we can always finalize something, but it could still be the case that we keep getting very unlucky and never actually end up finalizing anything. To solve this problem, we need to come up with a proposal mechanism, and make sure that the proposal mechanism has the property of actually helping us achieve liveness.

The proposal mechanism is the mechanism that proposes hashes, which the rest of the machinery, using PREPARE and COMMIT messages, then tries to finalize. The mechanism will sometimes be flawed, and it is the job of the slashing conditions to guarantee safety even when the proposal mechanism is flawed; only the protocol's ability to keep finalizing things is affected if the proposal mechanism stalls or malfunctions.

In many traditional Byzantine-fault-tolerant consensus algorithms, the proposal mechanism is tightly coupled with the rest of the algorithm. In PBFT, each view (roughly equivalent to an epoch) is assigned to a single validator, who is free to propose whatever they want. That validator may fail to propose anything, or propose an invalid hash, or propose multiple hashes, creating undesirable behavior, but the rest of the PBFT machinery ensures that none of these actions are fatal, and the algorithm automatically rotates to the next view. Here, we can actually combine our slashing conditions with many different proposal mechanisms, as long as they satisfy a few conditions.

First, the proposal mechanism must propose exactly one valid hash per epoch (the validity conditions can be quite complex; in the case of Ethereum, they involve verifying the execution of the Ethereum state transition function as well as verifying data availability).

Second, the hashes must form a chain: that is, a hash submitted for epoch N must have as its parent a hash submitted for epoch N-1, as its second-degree ancestor a hash submitted for epoch N-2, and so on.

Third, the hashes must be hashes that the slashing conditions do not prevent validators from finalizing. This point is quite subtle. Consider a scenario in which, in epoch 0, the proposal mechanism proposes HASH0, and in epoch 1 it proposes HASH1 (a direct child of HASH0), but for whatever reason neither hash gets enough prepares, and so neither gets any commits. Then, the proposal mechanism (due to some temporary fault) proposes another hash, HASH0', for epoch 0, and this hash gets 2/3 prepares and 1/2 commits.

Now, the proposal mechanism has two options. The first possibility is to propose HASH2 (a child of HASH1), then HASH3 (a child of HASH2), and so on. However, the slashing conditions ensure that none of these hashes can be committed without 1/6 of validators getting slashed. The other, correct, possibility is to propose HASH1' (a child of HASH0'), expecting that this hash may well never be finalized, because its competitor HASH1 already has more than 1/3 prepares and so HASH1' cannot get the 2/3 prepares it needs, and then to propose HASH2' (a child of HASH1'), which can be committed. The proposal mechanism can then keep proposing new hashes, each a child of the previous one.

A straightforward idea that some people may have is to take a traditional proof-of-work blockchain following the longest-chain rule and use it as the proposal mechanism, with every 100th block serving as a checkpoint: the block hash at height N*100 becomes the proposal for epoch N. But this by itself is not guaranteed to keep working, because in the scenario above such a proposal mechanism would try to get HASH2 committed rather than HASH1', and so it would never finalize any hash (this is not what we call "stuck", because getting out of the situation does not require anyone to get slashed, but it does require the miners to collude to mine on the chain containing HASH0', even though the chain containing HASH1 is longer from a proof-of-work perspective). What we can do, however, is combine a traditional proof-of-work blockchain with a modified fork choice rule.

A fork choice rule is a function, evaluated by the client, that takes as input the set of blocks and other messages that have been produced, and outputs to the client what the "canonical chain" is. "Longest valid chain wins" is a simple fork choice rule that works well for proof of work; the GHOST rule of Zohar and Sompolinsky is a more complex example. We can define a fork choice rule that lets a blockchain serve as the proposal mechanism for the consensus algorithm above, with all of the properties described, as follows:

1. Start with HEAD equal to the genesis block.

2. Find the valid descendant of HEAD that has 2/3 prepares and the largest number of commits.

3. Set HEAD equal to that descendant, then return to step 2.

4. When step 2 can no longer find a descendant with 2/3 prepares and at least one commit, use the blockchain's underlying fork choice rule (longest chain, GHOST, or whatever else) to find the tip.
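
Here is a sketch of this fork choice rule in code (illustrative only: descendants, two_thirds_prepared, commit_count and underlying_fork_choice stand in for the machinery described above and are assumptions of this example):

def casper_fork_choice(genesis, descendants, two_thirds_prepared,
                       commit_count, underlying_fork_choice):
    # Step 1: start with HEAD equal to the genesis block.
    head = genesis
    while True:
        # Step 2: valid descendants of HEAD with 2/3 prepares and >= 1 commit.
        candidates = [b for b in descendants(head)
                      if two_thirds_prepared(b) and commit_count(b) > 0]
        if not candidates:
            # Step 4: nothing left to follow; fall back to the underlying
            # rule (longest chain, GHOST, ...) to find the tip from HEAD.
            return underlying_fork_choice(head)
        # Step 3: take the candidate with the most commits and repeat.
        head = max(candidates, key=commit_count)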

Note that in the scenario above, this rule favors HASH0' over HASH1, and so it leads to the expected behavior. Also note that if there is a finalized chain, it always selects the finalized chain.

The slashing rules above ensure that one particular kind of failure is extremely expensive: finality reversion, i.e., rolling back finalized blocks. However, there are other kinds of failure that these rules do not address, notably finalizing an invalid hash and finalizing a hash that represents a chain containing unavailable data. At present, the simplest way to solve these problems with absolute cryptoeconomic security is to run a full node - download and validate all blocks, so that you can simply ignore invalid ones. Determining that a given hash is finalized can then be performed in two steps: (1) check for 2/3 commits, and (2) check the chain up to that hash to verify that it is valid.

There are two ways to offer stronger finality guarantees to light clients. The first is to add another type of message that validators can send (say, ["ATTEST", HASH, Epoch]), which has the effect that, if the message is included in a chain where the given hash actually is the hash of that epoch, the validator receives a small reward, but if it is included in a chain where the given hash is not the hash of that epoch, the validator takes a large penalty. Hence, validators will only send this message once they are convinced that the given hash is part of the canonical chain that the client sees and will remain so forever (validators might send such a message only well after the fact, once they have personally fully validated the chain up to the hash and checked its 2/3 commits).

The second is to give light clients access to various cryptoeconomic techniques that allow them to verify data availability and validity very efficiently with the help of only a few weak assumptions; this would likely involve some combination of erasure codes and interactive verification. This is very similar to the research involved in sharding, and the connection between the two is very close: the first method above requires validators to be full nodes, while the second does not, and sharding is all about creating a blockchain in which no one is a full node. Expect future posts on this topic as that research continues.

Special thanks to Yoichi Hirai, River Keefer and Ed for reviewing this article.

Notes:

1. Some people think that proof of work offers economic finality with a security margin of R * k, where R is the block reward and k is the number of blocks being reverted, but this is not true: if you successfully carry out a 51% attack, then you still receive the block rewards on your own chain. The budget requirement for a 51% attack really is roughly R * k, but the cost of a successful attack is zero.

2. The 1/3 ratio comes from traditional Byzantine fault tolerance theory, and is chosen here to maximize the total level of fault tolerance. In general, you can replace the 2/3 in the slashing conditions in this article with any value t > 1/2. The fault tolerance from a liveness perspective is then 1-t (because if more than 1-t are offline, you cannot get t), and from a safety perspective it is 2t-1 (if t can finalize A and t can finalize B, then together that adds up to 2t > 1, so at least 2t-1 must have participated in both). t = 2/3 maximizes the minimum of the two (1-t = 1/3, 2t-1 = 1/3); you could also try t = 3/5 (liveness: 2/5 fault tolerance, safety: 1/5) or t = 3/4 (liveness: 1/4, safety: 1/2). Values t < 1/2 also make some sense under certain assumptions about how many nodes might be offline, but dealing with those cases, and the broader concept of subjective finality thresholds, is a topic I will dig into more deeply in another article.
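
The trade-off can be computed directly; here is a quick illustrative script using exact fractions:

from fractions import Fraction

# Liveness tolerance is 1-t, safety tolerance is 2t-1; t = 2/3 maximizes
# the minimum of the two.
for t in (Fraction(3, 5), Fraction(2, 3), Fraction(3, 4)):
    liveness, safety = 1 - t, 2 * t - 1
    print(t, liveness, safety, min(liveness, safety))

# Prints:
# 3/5 2/5 1/5 1/5
# 2/3 1/3 1/3 1/3
# 3/4 1/4 1/2 1/4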

3. Of course, this holds only as long as the deposits have not yet been withdrawn; handling that case ("long-range attacks") will be the subject of another article of mine, albeit one that arguably should have been published two years ago.

4. At least, as long as the scheme uses plain hashes and signatures as its cryptography. This can become more difficult if threshold signatures are used, because a colluding coalition of 2/3 malicious nodes could forge the signature of any other participant.
