Ethereum Purple Paper (Chinese translation)
Author: Vitalik Buterin. Translation & source: Linktimetech public account
At the recently concluded Ethereum DevCon 2 conference, Ethereum founder Vitalik Buterin shared his latest research, the Ethereum Purple Paper, which we have translated into Chinese for everyone to read. The original English version of the Purple Paper can also be found through our public account. Translating such a cutting-edge and complex paper is a real test; if you find problems in the text below, please give us timely feedback, and we will publish corrections in a follow-up article on the public account. The full text of the paper follows:
(Figure: why is it called a purple paper?)
Objective
Over the past decade, projects such as Bitcoin, Namecoin and Ethereum have demonstrated the power of cryptoeconomic consensus networks to drive the next generation of decentralized systems, and have quietly expanded the scope of development from simple data storage and messaging services to the back-end management of arbitrary stateful applications. Applications proposed and implemented on this foundation worldwide cover low-cost global payment systems, financial contracts, prediction markets, identity registration and real-world property registration, the construction of more secure certificate-management systems, and even the tracing and tracking of manufactured goods through the supply chain.
However, the technical foundation of these systems still has serious efficiency problems. Because every full node on the network must maintain the entire system state and process every transaction, the throughput of the whole network is limited to that of a single computing node. Moreover, most existing systems use proof of work (POW) as their consensus mechanism, which consumes enormous amounts of electricity to operate; Bitcoin, the largest blockchain based on POW, consumes roughly as much electricity as the whole of Ireland.
This paper proposes a solution to the above problems that combines proof of stake (POS) with sharding. POS itself is not a novel idea, having existed since 2011, but the new algorithm shows substantial benefits: it not only resolves the shortcomings of earlier systems, but also has properties that POW can never achieve.
POS can be thought of as virtual mining. In POW, users spend a certain amount of money to buy computers and then consume real electricity, and obtain blocks at random, roughly in proportion to the cost they put in. In POS, users spend money to purchase virtual tokens inside the system, and an in-protocol mechanism converts those tokens into virtual computers; the system then simulates producing blocks at random, in proportion to the purchase cost, achieving the same effect as POW without consuming electricity.
Sharding is also not novel; it has been used in distributed database design for more than a decade. So far, however, research on applying it to blockchains has been limited. The basic approach is to address the scalability challenge by randomly assigning nodes from a global validator set (in our case, established through stake deposits) to specific "shards", where each shard processes a different part of the global state in parallel, ensuring that work is distributed across nodes rather than repeated by every node.
We aspire to achieve the following goals:
- Efficiency via POS: the consensus mechanism should not be secured by mining, thereby greatly reducing wasted electricity as well as the need for large ongoing ETH issuance.
- Fast block time: the block time should be as fast as possible without compromising security.
- Economic finality: once a block is created, after a certain amount of time and events have passed, a majority of validators "fully commit" to that block, meaning that they would lose their entire ether deposits in any history that does not contain that block (think of ether worth on the order of $10 million). This is highly desirable because it means that a majority collusion cannot revert the chain or carry out a 51% attack without destroying their own ether. The default validator strategy is designed to be conservative, so that validators are willing to make such high-value commitments while honest validators face only low risk.
- Scalability: it should be possible to run the blockchain without every node processing everything; for example, each node, including validating nodes, keeps only a small shard of the data and uses light-client techniques to access the rest of the chain. Compared with the processing capacity of a single node, the blockchain can then achieve much higher transaction throughput, while the platform can still be run on a large number of ordinary personal computers, so decentralization is also preserved.
- Cross-shard communication: it should be as feasible as theoretically possible to build applications, and interoperability between applications, whose resource usage exceeds the computing power and bandwidth of a single node, and whose state is stored across different nodes and different shards.
- Censorship resistance: the protocol should be designed to withstand a combined attack by a majority of malicious validating nodes across all shards attempting to keep valid transactions from being packaged into blocks and becoming part of the blockchain. To some extent this shares limitations already present in Ethereum 1.0, but we can make the mechanism more robust by introducing the concepts of guaranteed scheduling and guaranteed cross-shard messaging.
We begin by describing an algorithm that achieves goals (1) and (2), then a second algorithm that adds goal (3), and then a third algorithm that achieves goals (4) and (5) to a moderate extent (the constraints being that a node's computing power scales roughly with the square root of total throughput for (4), and a 24-hour cross-shard message delay for (5), though faster messaging can be built on top as a layer using dual-use of validator deposits). Higher degrees of satisfaction of goals (4) and (5), as well as goal (6), are left to later redesigns 2.1 and 3.0.
Constants
We set:
- BLOCK_TIME: 4 seconds (aiming on the less ambitious side to reduce overhead)
- SKIP_TIME: 8 seconds (aiming on the less ambitious side to reduce overhead)
- EPOCH_LENGTH: 10800 blocks (ie. 12 hours under good circumstances)
- ASYNC_DELAY: 10800 blocks (ie. 12 hours under good circumstances)
- CASPER_ADDRESS: 255
- WITHDRAWAL_DELAY: 10000000 seconds, ie. 4 months
- GENESIS_TIME: some future timestamp marking the start of the blockchain, say 1500000000
- REWARD_COEFFICIENT: 3 / 1000000000
- MIN_DEPOSIT_SIZE: 32 ether
- MAX_DEPOSIT_SIZE: 131072 ether
- V_LOSS_MAXGROWTH_FACTOR: 32
- FINALITY_REWARD_COEFFICIENT: 0.6 / 1000000000
- FINALITY_REWARD_DECAY_FACTOR: 1000 blocks (ie. 1.1 hours under good circumstances)
- MIN_BET_COEFF: 0.25
- NUM_SHARDS: 80
- VALIDATORS_PER_SHARD: 120
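For reference, the constants above can be collected into a small configuration module. A minimal sketch in Python; the names mirror the list above, and the unit conversions (wei per ether, per-second coefficients) are assumptions made for illustration:

```python
# Illustrative sketch of the protocol constants listed above.
ETHER = 10**18  # assumed base-unit conversion

CONSTANTS = {
    "BLOCK_TIME": 4,                       # seconds
    "SKIP_TIME": 8,                        # seconds
    "EPOCH_LENGTH": 10800,                 # blocks (~12 hours at 4 s/block)
    "ASYNC_DELAY": 10800,                  # blocks
    "CASPER_ADDRESS": 255,
    "WITHDRAWAL_DELAY": 10_000_000,        # seconds (~4 months)
    "GENESIS_TIME": 1_500_000_000,         # example future timestamp
    "REWARD_COEFFICIENT": 3 / 10**9,       # per second
    "MIN_DEPOSIT_SIZE": 32 * ETHER,
    "MAX_DEPOSIT_SIZE": 131072 * ETHER,
    "V_LOSS_MAXGROWTH_FACTOR": 32,
    "FINALITY_REWARD_COEFFICIENT": 0.6 / 10**9,
    "FINALITY_REWARD_DECAY_FACTOR": 1000,  # blocks
    "MIN_BET_COEFF": 0.25,
    "NUM_SHARDS": 80,
    "VALIDATORS_PER_SHARD": 120,
}
```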
Minimal proof of stake
(Note: the remainder of this document assumes the reader has a basic understanding of Ethereum 1.0.)
We can create a minimal viable POS algorithm without features such as finality, extra anti-censorship measures, or sharding. There is a contract at CASPER_ADDRESS whose main function is to track changes to the validator set. The contract has no special privileges, except that calling it is part of the process of validating a block header, and it is included in the genesis block rather than being added dynamically through a transaction. The validator set is initially set in the genesis block and can be adjusted through the following functions:
- deposit(bytes validation_code, bytes32 randao, address withdrawal_address):
Accepts a certain amount of ether as a deposit. The sender specifies a piece of validation code (essentially EVM bytecode whose main function is to act as a public key, so that other nodes can later verify block data and related network consensus messages signed by this validator), a randao commitment (a 32-byte hash used in validator selection; see below) and a withdrawal address. Note that the withdrawal address may be a contract whose only purpose is to release funds under specific conditions, which, if desired, allows the deposit to be dual-used. If all parameters are accepted, the validator is added to the validator set starting from the second following epoch (for example, if the deposit is made during epoch n, the validator joins in epoch n+2, where an epoch is a period of EPOCH_LENGTH blocks). The hash of the validation code (called vchash) can be used as the validator's identifier; two validators are forbidden from having the same vchash.
- startWithdrawal(bytes32 vchash, bytes sig):
Begins the withdrawal process. It requires a signature that passes the validator's validation code. If the signature is valid, the validator is removed from the validator set starting from the next epoch. Note that this function does not return any ether.
There is also a function:
- withdraw(bytes32 vchash):
Withdraws the validator's ether, plus rewards and minus penalties, to the specified withdrawal address, as long as the validator left the active validator set via startWithdrawal at least WITHDRAWAL_DELAY seconds ago.
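A minimal sketch of this validator-set lifecycle, written in Python rather than EVM code purely for illustration; the class name, storage layout and epoch bookkeeping are assumptions, not the actual Casper contract:

```python
import time
from hashlib import sha3_256

WITHDRAWAL_DELAY = 10_000_000  # seconds (~4 months)

def vc_hash(validation_code: bytes) -> bytes:
    """Identifier of a validator: hash of its validation code (vchash)."""
    return sha3_256(validation_code).digest()

class CasperSketch:
    """Toy model of the deposit / startWithdrawal / withdraw lifecycle."""

    def __init__(self):
        self.validators = {}   # vchash -> record
        self.current_epoch = 0

    def deposit(self, validation_code, randao, withdrawal_address, value):
        vchash = vc_hash(validation_code)
        assert vchash not in self.validators, "duplicate validation code"
        self.validators[vchash] = {
            "randao": randao,
            "withdrawal_address": withdrawal_address,
            "deposit": value,                        # grows/shrinks with rewards and penalties
            "active_from": self.current_epoch + 2,   # joins in epoch n+2
            "active_until": None,
            "withdrawal_started": None,
        }
        return vchash

    def start_withdrawal(self, vchash, signature_ok: bool):
        # In the real protocol the signature is checked by running the
        # validator's validation code; here that check is just a flag.
        assert signature_ok, "signature rejected by validation code"
        v = self.validators[vchash]
        v["active_until"] = self.current_epoch + 1   # leaves the active set
        v["withdrawal_started"] = time.time()

    def withdraw(self, vchash):
        v = self.validators[vchash]
        assert v["withdrawal_started"] is not None
        assert time.time() >= v["withdrawal_started"] + WITHDRAWAL_DELAY
        return v["withdrawal_address"], v["deposit"]  # what would be paid out
```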
To be precise, the validation code takes as input a block header hash together with a signature, and returns 1 if the signature is valid and 0 otherwise. This mechanism ensures that we do not lock validators into any particular signature algorithm; instead, a validator can use validation code that, for example, verifies signatures against multiple private keys instead of a single one, or uses Lamport signatures to resist quantum computer attacks. The code is executed in a black-box environment using a new CALL_BLACKBOX opcode, to ensure that its execution is independent of external state. This prevents attacks such as a validator creating validation code that returns 1 when conditions are favorable and 0 when conditions are unfavorable (for example, when the block is being included as a dunkle).
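As an illustration only (the key scheme and helper names are assumptions, not part of the protocol), a validation code written in Python instead of EVM bytecode might look like this: a pure function of the header hash and signature, with the public key baked into the code itself.

```python
from hashlib import sha3_256

# A toy "validation code": the public key is embedded in the code itself,
# and the function is pure -- it may not read any external state.
PUBLIC_KEY = b"validator-public-key"  # placeholder, an assumption for illustration

def validation_code(header_hash: bytes, signature: bytes) -> int:
    # Stand-in for a real signature check (e.g. ECDSA or a Lamport scheme):
    # here "signing" is modelled as hashing the header hash with the key.
    expected = sha3_256(PUBLIC_KEY + header_hash).digest()
    return 1 if signature == expected else 0
```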
The randao parameter of the deposit function should be the result of computing a long chain of hashes: the validator picks a secret random x and computes randao = sha3(sha3(sha3(...(sha3(x))...))). Each validator's randao value is stored in the Casper contract's storage.
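A small sketch of how a validator could prepare such a commitment and later reveal preimages one layer at a time; the chain length and helper names are illustrative assumptions:

```python
from hashlib import sha3_256
import os

def sha3(data: bytes) -> bytes:
    return sha3_256(data).digest()

def make_randao_chain(length: int = 10000):
    """Start from a secret x and hash it `length` times.
    The last element is the public commitment; earlier elements are revealed
    one by one, newest first, each time the validator produces a block."""
    secret = os.urandom(32)
    chain = [secret]
    for _ in range(length):
        chain.append(sha3(chain[-1]))
    return chain

chain = make_randao_chain()
commitment = chain[-1]      # submitted with deposit()
first_reveal = chain[-2]    # revealed in the validator's first block
assert sha3(first_reveal) == commitment
```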
The Casper contract also contains a variable called globalRandao, initialized to 0. The contract exposes a function getValidator(uint256 skips), which returns the validation code of the validator after `skips` skips. For example, getValidator(0) returns the first validator (the validator that would normally create the block), and getValidator(1) returns the second validator (who may create the block if the first one does not).
Each validator is selected pseudo-randomly from the current active validator set, weighted by deposit size, using the globalRandao value in the Casper contract as the pseudo-random seed. In addition to a signature, a valid block must contain the preimage of the validator's currently stored randao value. The revealed preimage then replaces the stored randao value, and is also folded into the contract's globalRandao value by an XOR operation. Thus, each block produced by a validator requires peeling off one layer of that validator's randao. This is the blockchain randomness algorithm explained here: http://vitalik.ca/files/randomness.html.
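A compact sketch of this selection-and-reveal step, showing deposit-weighted sampling plus the XOR update of globalRandao; the data layout is an assumption made for illustration:

```python
from hashlib import sha3_256

def sha3(data: bytes) -> bytes:
    return sha3_256(data).digest()

def get_validator(validators, global_randao: bytes, skips: int):
    """Deposit-weighted pseudo-random selection, seeded by globalRandao and skips."""
    total = sum(v["deposit"] for v in validators)
    seed = int.from_bytes(sha3(global_randao + skips.to_bytes(32, "big")), "big")
    target = seed % total
    for v in validators:
        if target < v["deposit"]:
            return v
        target -= v["deposit"]

def apply_block(validator, revealed_preimage: bytes, global_randao: bytes) -> bytes:
    """Check the randao reveal, peel one layer, and fold it into globalRandao."""
    assert sha3(revealed_preimage) == validator["randao"], "bad randao preimage"
    validator["randao"] = revealed_preimage
    return bytes(a ^ b for a, b in zip(global_randao, revealed_preimage))
```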
In summary, a block must contain the following additional data:
<vchash><randao><sig>
Here vchash is the 32-byte hash of the validation code, used to quickly identify the validator; randao is as described above (also 32 bytes); and sig is the signature, which can be of any length (although we limit the block header size to 2048 bytes).
The minimum timestamp at which a block can be created is: GENESIS_TIME + BLOCK_TIME * <block height> + SKIP_TIME * <total number of validator skips since the genesis block>.
In practice, this means that once a block is published, the 0-skip validator's next block becomes valid BLOCK_TIME seconds later, the 1-skip validator's block becomes valid after BLOCK_TIME + SKIP_TIME seconds, and so on.
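This schedule can be expressed directly; a small sketch of the timing rule under the constants above:

```python
GENESIS_TIME = 1_500_000_000
BLOCK_TIME = 4   # seconds
SKIP_TIME = 8    # seconds

def min_timestamp(height: int, total_skips: int) -> int:
    """Earliest time at which a block at this height, with this many
    cumulative skips since genesis, may be created."""
    return GENESIS_TIME + BLOCK_TIME * height + SKIP_TIME * total_skips

# Example: the 1-skip validator for block 100 (with no earlier skips)
# may publish BLOCK_TIME + SKIP_TIME seconds after the height-99 slot.
print(min_timestamp(100, 1) - min_timestamp(99, 0))   # -> 12
```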
If a validator publishes a block too early, other validators ignore the block until the specified time, and only then process it (a further description and justification of this mechanism is given here: http://vitalik.ca/files/timing.html). The asymmetry between the short BLOCK_TIME and the long SKIP_TIME ensures that, under normal circumstances, the average block time can be very short, while the network remains secure under longer network latency.
If a validator creates a block that is included in the chain, they receive a block reward equal to the total amount of ether in the active validator set during that epoch, multiplied by REWARD_COEFFICIENT * BLOCK_TIME. Hence, if validators always perform their duties correctly, REWARD_COEFFICIENT essentially becomes the validator's "expected return per second"; multiply by roughly 32 million (the number of seconds in a year) to get an approximate annual return. If a validator creates a block that is not included in the chain, then at any future time (until the validator calls the withdraw function) the block header can be included in the chain as a "dunkle" via the includeDunkle function of the Casper contract; this causes the validator to lose an amount equal to the block reward (with a small fraction of the penalty given to the party who includes the dunkle, as an incentive). Therefore, a validator should only create a block if they judge that the block has a greater than 50% chance of being included in the chain; the mechanism discourages validating on every chain. The cumulative deposit of each validator, including rewards and penalties, is stored in the state of the Casper contract.
The purpose of the "dunkle" mechanism is to solve the "0 bet" issue in the proof of entitlement, where, if there is no penalty, only the reward, then the authenticator will be materially motivated to attempt to create chunks on each possible chain. In a proof-of-work scenario, it costs money to create chunks, and it is only profitable to create chunks on the "main chain." The dunkle mechanism attempts to replicate the economic theory in the proof of work, to create a manual penalty for creating blocks on non-primary chains, instead of "natural fines" for the cost of the CLP.
Assuming a fixed-size validator set, we can easily define the fork-choice rule: count the blocks, and the longest chain wins. Given that the validator set can grow and shrink, however, this rule works less well, because a fork supported by only a few validators would, after some time, produce blocks at a rate similar to a fork supported by the majority. Therefore, instead of counting blocks, we define the fork-choice rule by giving each block a weight equal to its block reward. Because the block reward is proportional to the amount of actively validating ether, this ensures that the score of a chain with more actively validating ether grows faster.
As we will see, this rule can also be understood in another way: as a fork-choice model based on value-at-loss. The principle is that we choose the chain on which validators have bet the most value, that is, the chain such that validators acknowledge they would lose a large amount of money on every other chain. Equivalently, this is the chain on which validators stand to lose the least. In this simple scheme, it is easy to see that this corresponds exactly to the longest chain with block weights equal to block rewards. The algorithm is simple, but it is efficient enough for a minimal proof-of-stake implementation.
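A sketch of this weight-by-reward fork choice; the block and chain representations are assumptions, while the scoring itself follows the rule just described:

```python
def chain_score(chain):
    """Score of a chain = sum of the block rewards of its blocks,
    which is proportional to the actively validating ether behind each block."""
    return sum(block["reward"] for block in chain)

def choose_head(chains):
    """Pick the chain with the highest cumulative score."""
    return max(chains, key=chain_score)

# Example: a chain backed by more validating ether wins even if slightly shorter.
chain_a = [{"reward": 10}, {"reward": 10}, {"reward": 10}]
chain_b = [{"reward": 18}, {"reward": 18}]
assert choose_head([chain_a, chain_b]) is chain_b
```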
Adding finality
The next step is to add the concept of economic finality. We do this as follows. Inside the block header, in addition to the hash of the parent block, a validator now also makes an assertion about the probability that a previous block, the FINALIZATION_TARGET, will be finalized. The assertion is treated as a bet, as in: "I believe block 0x5e81d... will be finalized, and in any history where this assertion is wrong I am willing to lose V_LOSS, provided that in any history where it is correct I receive V_GAIN." The validator chooses an odds parameter, and V_LOSS and V_GAIN are computed as follows (TOTAL_VALIDATING_ETHER is the total amount of ether in the active validator set, MAX_REWARD is the maximum allowed block reward, and BET_COEFF is a coefficient defined below):
- BASE_REWARD = FINALITY_REWARD_COEFFICIENT * BLOCK_TIME * TOTAL_VALIDATING_ETHER
- V_LOSS = BASE_REWARD * ODDS * BET_COEFF
- V_GAIN = BASE_REWARD * log(ODDS) * BET_COEFF
(Figure: V_GAIN and V_LOSS as a function of ODDS, relative to BASE_REWARD, assuming BET_COEFF = 1.)
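The payoff of a single finality bet can be written down directly. A sketch under the formulas above; the one-byte LOGODDS encoding used later in the header format (each step of 8 doubles the odds) is also decoded here, and the logarithm base is an assumption:

```python
import math

FINALITY_REWARD_COEFFICIENT = 0.6 / 10**9
BLOCK_TIME = 4

def base_reward(total_validating_ether: float) -> float:
    return FINALITY_REWARD_COEFFICIENT * BLOCK_TIME * total_validating_ether

def decode_logodds(logodds_byte: int) -> float:
    """One-byte logarithmic odds: 0 -> 1, 8 -> 2, 16 -> 4, ..."""
    return 2 ** (logodds_byte / 8)

def bet_payout(total_validating_ether: float, logodds_byte: int, bet_coeff: float):
    odds = decode_logodds(logodds_byte)
    v_loss = base_reward(total_validating_ether) * odds * bet_coeff
    v_gain = base_reward(total_validating_ether) * math.log(odds) * bet_coeff
    return v_loss, v_gain

# Example: betting at odds 16 (logodds byte 32) with BET_COEFF = 1.
print(bet_payout(10_000_000 * 10**18, 32, 1.0))
```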
FINALIZATION_TARGET starts as null, but during block 1 it is set to block 0. BET_COEFF starts at 1, and another variable, CACHED_BET_COEFF, starts at 0. During each block we set BET_COEFF -= BET_COEFF / FINALITY_REWARD_DECAY_FACTOR and CACHED_BET_COEFF -= CACHED_BET_COEFF / FINALITY_REWARD_DECAY_FACTOR (this ensures there is always an incentive to bet), though BET_COEFF cannot drop below MIN_BET_COEFF. When creating a block, a validator also receives BASE_REWARD * log(MAXODDS) * CACHED_BET_COEFF, where MAXODDS is the largest possible odds, ie. MAX_DEPOSIT_SIZE / BASE_REWARD. The effect is that once a block is finalized, validators keep getting rewarded as though they were continuing to bet on it at maximum odds. This ensures that validators do not face a perverse incentive to collude in delaying the finalization of a block in order to extract the maximum gain.
When the Casper contract determines that the FINALIZATION_TARGET has been finalized (that is, the total value-at-loss on a given block exceeds some threshold), we set the new FINALIZATION_TARGET to the current block, set CACHED_BET_COEFF += BET_COEFF, and reset BET_COEFF to 1. Starting from the next block, the finalization process begins again with the new FINALIZATION_TARGET. If there is a short chain split, the finalization process may be running for multiple blocks at the same time, possibly even blocks at different heights; however, given the default validator strategy of betting on the block with the greatest value-at-loss behind it, we expect the process to converge on one of them (the convergence argument here is essentially the same as for the minimal proof of stake).
When the finalization process for a new block starts, we expect the initial odds to be quite low, reflecting validators' fear of a short-range fork, but as time goes on, the odds validators are willing to bet at will gradually increase. In particular, if they see other validators placing high-odds bets on the block, each validator's own bets will rise as well. We can expect the value-at-loss on the block to grow exponentially, thus reaching the maximum "total deposit at loss" within logarithmic time.
In the extra data of the block header, we now change the required data format to the following:
<vchash><randao><blockhash><logodds><sig>
Here blockhash is the hash of the block being bet on, and logodds is a one-byte value representing the odds in logarithmic form (ie. 0 corresponds to 1, 8 corresponds to 2, 16 corresponds to 4, etc.).
Note that we cannot allow validators to set the odds completely freely. Consider the following scenario: there are two competing finalization targets, B1 and B2 (ie. two chains, one with its FINALIZATION_TARGET set to B1 and the other set to B2), and consensus has formed around B1. A malicious validator could then suddenly place a high-odds bet on B2 whose value-at-loss is large enough to sway the consensus, thereby triggering a short-range fork. We therefore limit the odds, by restricting V_LOSS, with the following rules:
- Let V_LOSS_EMA be an exponential moving average, set as follows. V_LOSS_EMA starts out equal to the block reward. During each block, V_LOSS_EMA is set to V_LOSS_EMA * (V_LOSS_MAXGROWTH_FACTOR - 1 - skips) / V_LOSS_MAXGROWTH_FACTOR + V_LOSS, where skips is the number of skips and V_LOSS is the V_LOSS chosen by the block.
- Set V_LOSS_MAX to V_LOSS_EMA * 1.5. V_LOSS values up to this maximum are allowed.
This rule is designed to introduce a safety constraint: a validator can only put 1.5x at risk when (a sample of) two thirds of the other validators are putting x at risk. This is analogous to the prepare/commit pattern of Byzantine-fault-tolerant consensus algorithms, where a validator should wait for two thirds of the other validators to complete a given step before proceeding to the next one. It provides a degree of safety, and it ensures that a majority collusion cannot engage in "griefing" attacks (for example, luring other validators into placing large bets on a block and then pushing consensus to a different block) without itself losing a large amount of money; in fact, such a collusion loses money faster than its victims, which is an important property, because it ensures that in most hostile situations malicious actors will eventually be "weeded out".
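A sketch of this cap, following the two rules above; the numeric example is illustrative:

```python
V_LOSS_MAXGROWTH_FACTOR = 32

def update_v_loss_ema(v_loss_ema: float, skips: int, v_loss: float) -> float:
    """Exponential moving average of recent V_LOSS values, updated each block."""
    return (v_loss_ema * (V_LOSS_MAXGROWTH_FACTOR - 1 - skips)
            / V_LOSS_MAXGROWTH_FACTOR) + v_loss

def max_allowed_v_loss(v_loss_ema: float) -> float:
    """A block may not bet more than 1.5x the moving average of recent bets."""
    return v_loss_ema * 1.5

# Example: starting from an EMA equal to the block reward, a validator
# cannot immediately jump to an enormous bet; the cap grows as bets grow.
ema = 10.0                       # block reward, illustrative units
print(max_allowed_v_loss(ema))   # -> 15.0
ema = update_v_loss_ema(ema, skips=0, v_loss=15.0)
print(max_allowed_v_loss(ema))
```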
If a block is added to the chain as a dunkle, its bets are still processed, and the associated penalties and rewards are applied. For example, suppose there are two competing blocks at height 5000, A1 and A2, and two competing blocks at height 5050, B1 and B2 (both with A1 as an ancestor), and a validator builds block C on top of B1, betting on A1. Then, if B2 ends up on the main chain, B1 and C become dunkles, and C is penalized for backing B1 over B2, but is still rewarded for its bet on A1.
However, suppose that the V_LOSS in C satisfies V_LOSS < V_LOSS_MAX if B1 is included, but V_LOSS > V_LOSS_MAX if B2 is included. Then, in order to preserve the intended value-at-loss property, we apply an additional penalty: even if the validator's bet turns out to be right, we still penalize them by V_LOSS - V_LOSS_MAX. In effect we decompose V_LOSS into (i) a bet with value-at-loss V_LOSS_MAX and (ii) pure value destruction of V_LOSS - V_LOSS_MAX, thereby guaranteeing that a bet of this size contributes only V_LOSS_MAX to the fork-choice rule. This means that betting is not entirely "safe" in one sense: even if a block has been finalized, a bet on it may still incur a penalty if many of the blocks on the path to it end up on a fork. Pure value destruction in the betting model is considered an acceptable trade-off for the purity of the value-at-loss fork-choice rule.
Scoring and strategy implementation
Value-at-loss scoring can be implemented with the following algorithm:
- Keep track of the latest finalized block. If there are multiple incompatible finalized blocks, return a large flashing red error, since this indicates that a finality reversion has occurred, and the client's user may need to consult additional sources to determine what happened.
- Keep track of all finalization candidates, that is, descendants of the latest finalized block that are finalization targets. For each candidate, keep track of its total value-at-loss.
- Keep track of the longest chain descending from each candidate, and its length, starting from the latest finalized block.
- The "total weight" of the chain is the value loss of the ancestor of its finalized candidate block plus the length of the chain multiplied by the block reward. If no candidate blocks are finalized in the chain, then the length of the chain is multiplied by the chunk reward as its total weight. The "Chain head" is the most recent chunk of weight in the chain.
(Figure: example V_LOSS values; in reality they would not be allowed to grow this quickly, and B or C would require a higher V_LOSS on A to become a finalization candidate.)
A simple validator strategy is to create blocks only on the head, and to place finality bets with a value-at-loss equal to 80% of the maximum allowed value-at-loss described above.
Light Client Synchronization
The finality mechanism opens the door to a fast light-client synchronization algorithm. The algorithm consists of the following steps:
- Let X be the most recent state you have verified (initially the genesis state).
- Ask the network for the most recent finalization target during or after the epoch of X (recall that a finalization target is set at the moment the protocol considers the previous target finalized). Call the latest finalization target Fn and the previous finalization target Fp.
- Ask the network for the k blocks before Fn. These blocks will have bet essentially the entire validating ether pool on Fp.
- Using Merkle branches against the state you last verified, check that those blocks were created by validators, verify the validators' presence and position in the validator set, and verify the correctness of their selection against the pre-state of the first of the k blocks.
- Set X to the post-state of Fp.
- Repeat the above steps until you reach the most recent finalized block. From there, use the normal strategy described above to find the chain head.
Note that each pass through these steps lets a client verify roughly a day's worth of blocks with two network requests and a few seconds of computation.
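A high-level sketch of this loop; the network-query and proof-verification helpers below are placeholders standing in for real network and Merkle-proof machinery, and are assumptions rather than part of the protocol:

```python
def light_sync(state, network, k=64):
    """Skeleton of the fast light-client sync described above.
    `network` is assumed to expose the query helpers used below."""
    while True:
        targets = network.get_finality_targets(after_epoch=state.epoch)
        if targets is None:
            break                             # reached the latest finalized block
        fp, fn = targets                      # previous and latest finalization targets
        blocks = network.get_blocks_before(fn, count=k)
        for block in blocks:
            # Merkle branches against `state` prove that each producer is a
            # validator, its position in the set, and that it was correctly selected.
            assert network.verify_validator_proof(state, block)
        state = network.get_post_state(fp)    # jump forward to Fp's post-state
    return state
```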
Sharding
Now let us consider expanding from one shard to many. The model we build is as follows. Instead of a single blockchain, we now have multiple interlinked chains that we call "shards". There are NUM_SHARDS shards, numbered shard 0 to shard NUM_SHARDS - 1, where shard 0 operates simply as a regular proof-of-stake blockchain with finality as described above, but shards 1 ... NUM_SHARDS - 1 work differently. At the beginning of each epoch, VALIDATORS_PER_SHARD validators are randomly selected for each shard and designated as that shard's validators for the next epoch (ie. the validators for epoch n+1 are chosen during epoch n). When getValidator(skips) is called to determine the block creator within one of these shards, it selects a validator at random from that shard's selected validator set (with equal probability, since deposit size was already weighted during selection). Finality bets for shards 1 ... NUM_SHARDS - 1 are not made inside the shards themselves but inside shard 0; when a bet is made, it is stored, and it is processed only after the end of a later epoch (for example, finality assertions about blocks in epoch n+1 of a shard are processed in shard 0 at the start of epoch n+3).
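A sketch of this per-epoch sampling, ie. a deposit-weighted draw into per-shard committees; the helper structure and the use of a library RNG seeded from the randao source are assumptions made for illustration:

```python
import random

NUM_SHARDS = 80
VALIDATORS_PER_SHARD = 120

def sample_shard_committees(validators, seed: int):
    """Assign VALIDATORS_PER_SHARD validators to each shard for the next epoch,
    weighting the draw by deposit size."""
    rng = random.Random(seed)                  # seed derived from the randao source
    weights = [v["deposit"] for v in validators]
    committees = {}
    for shard in range(NUM_SHARDS):
        committees[shard] = rng.choices(validators, weights=weights,
                                        k=VALIDATORS_PER_SHARD)
    return committees

def get_shard_validator(committees, shard: int, index: int):
    """Within a shard, validators are then picked uniformly (weighting already done)."""
    return committees[shard][index % VALIDATORS_PER_SHARD]
```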
If a validator has been selected for a shard, the validator needs to call the Casper contract's registerForShard(bytes32 vchash, uint256 shard, uint256 index, bytes32 randao) function, where vchash is the validator's validation-code hash, shard is the shard ID, index is a value with 0 <= index < VALIDATORS_PER_SHARD such that getShardValidator(uint256 shard, uint256 index) returns the given validation-code hash, and randao is a randao commitment. The function then generates a receipt, which the validator can confirm on the target shard using confirmReceipt(uint256 receiptId) in order to begin validating there.
getShardValidator relies on a separate source of randomness, but otherwise its logic is similar to getValidator. The random source is obtained as follows:
- During each epoch, for each k with 0 <= k < 24, keep track of the number of blocks in which the k-th last bit of globalRandao was 1, minus the number of blocks in which that bit was 0.
- At the end of each epoch, compute combinedRandao as follows: for each k with 0 <= k < 24, if the k-th last bit of globalRandao was 1 more often during the epoch, set the k-th last bit of combinedRandao to 1; if it was 0 more often, set that bit to 0. Use sha3(combinedRandao) as the random source.
Using Iddo Bentov's low-influence functions increases the cost of manipulating this random source; since this particular seed has substantial economic consequences, it is a much bigger target for manipulation than usual.
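A sketch of this majority-bit construction over an epoch's worth of globalRandao values, treating only the lowest 24 bits as described above:

```python
from hashlib import sha3_256

NUM_BITS = 24

def combined_randao(globalrandao_values):
    """Majority vote on each of the last 24 bits across the epoch,
    then hash the result to obtain the shard-sampling seed."""
    counts = [0] * NUM_BITS                   # +1 when the bit is 1, -1 when it is 0
    for value in globalrandao_values:
        for k in range(NUM_BITS):
            counts[k] += 1 if (value >> k) & 1 else -1
    combined = 0
    for k in range(NUM_BITS):
        if counts[k] > 0:
            combined |= 1 << k
    return sha3_256(combined.to_bytes(32, "big")).digest()

# Example over a toy "epoch" of three globalRandao samples.
print(combined_randao([0b1011, 0b0011, 0b1110]).hex())
```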
Cross-shard finality bets are not placed in block headers, so as not to unduly burden light clients; instead, a validator makes a transaction in any block they create that calls a registerFinalityBets(bytes32[] hashes, bytes logodds) function, which expects NUM_SHARDS hashes and a byte array of length NUM_SHARDS, with each byte representing the odds for the corresponding block hash.
The typical workflow for a validator is to maintain a full node for shard 0 and keep track of which shards they are assigned to in the future. If a validator is assigned to a shard, they download the state using Merkle-tree proofs and make sure they have it by the time they need to start validating. For that epoch they act as a validator for that shard and create blocks; at the same time they place finality bets on all shards, based on (i) the longest chain on each shard, (ii) the finality bets of other validators, and (iii) various heuristics and mechanisms for detecting attempted 51% attacks within a shard (e.g. fraud proofs). Note that the probability of being assigned to any given shard is proportional to the validator's ether deposit, so a validator whose assets double also has twice as much computation to do. This property is considered desirable, because it improves fairness and reduces the incentive to form pools, and it introduces the principle that processing transactions and storing the blockchain itself becomes a kind of mixed-in "proof of work".
The sampling mechanism is intended to ensure that only a small number of validators are needed for actual transaction verification, while the system remains secure against an attacker who accumulates up to ~33-40% of the total deposited ether (less than 50%, because an attacker might get "lucky" in some particular shard). Because the sampling is random, attackers cannot choose to concentrate their stake on a single shard, which is a fatal flaw in many proof-of-work sharding schemes. Even if a shard is attacked, there is a second line of defence: if other validators see evidence of the attack, they can refuse to make finality claims on the attacker's fork and instead confirm the chain created by honest nodes. If an attacker on a shard tries to extend a chain with invalid blocks, validators on other shards can detect this, temporarily run full nodes on that shard, and make sure that they only finalize valid blocks.
Cross-Shard Communication
Cross-shard communication in this scheme works as follows. We create an ETHLOG opcode (with two arguments: to, value), which creates a log whose topic is the empty string (note: the empty string, not 32 zero bytes; an ordinary log can only use 32-byte strings as topics) and whose data is a 64-byte string containing the destination and the value. We also create a GETLOG opcode, which takes a single argument, a log ID defined as block.number * 2**64 + txindex * 2**32 + logindex (where txindex is the index of the transaction containing the log within its block, and logindex is the index of the log within the transaction receipt), and attempts to fetch the specified log. It records in the state that the log has been consumed and places the log data into a target array. If the log's topic is the empty string, it also transfers the ether to the recipient. For the log contents to be fetched successfully, the transaction calling this opcode must carry the log ID as a parameter. If v = 0, we allow the r value in the signature to be reused for this purpose (note: this means that only EIP 86 transactions can be used here; we hope that by then EIP 86 transactions will be the main form of transaction).
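The log ID packing can be illustrated directly; a small sketch, with field widths following the formula above:

```python
def log_id(block_number: int, txindex: int, logindex: int) -> int:
    """Pack (block number, transaction index, log index) into one GETLOG argument."""
    return block_number * 2**64 + txindex * 2**32 + logindex

def unpack_log_id(log_id_value: int):
    """Inverse of log_id: recover the block number, tx index and log index."""
    block_number, rest = divmod(log_id_value, 2**64)
    txindex, logindex = divmod(rest, 2**32)
    return block_number, txindex, logindex

assert unpack_log_id(log_id(5_000_000, 3, 1)) == (5_000_000, 3, 1)
```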
The consensus abstraction is now no longer a single chain, but a collection of chains, C[0] ... C[NUM_SHARDS - 1]. The state transition function for shard k is no longer stf(state_k, block) -> state_k', but rather stf(state_k, block, r_c[0] ... r_c[NUM_SHARDS - 1]) -> state_k', where
- r_c[i] is the collection of receipts from chain i that are older than ASYNC_DELAY blocks.
Note that there are several ways to "satisfy" this abstraction. One is the "every node is a full node" approach: all nodes store the state of all shards, update the chains of all shards, and therefore have enough information to compute all the state transition functions. However, this is uninteresting because it does not scale.
A more interesting strategy is the "medium node" approach: most nodes pick a few shards that they keep fully up to date (very likely including shard 0) and act as light clients for all other shards. When computing the state transition functions they need old transaction receipts, which they do not store; for this we add a network protocol rule requiring transactions to be accompanied by Merkle proofs of any receipts they statically reference (the reason the reference must be static is clear: otherwise, any GETLOG operation generated at run time would, because of network latency, turn a log read into a slow multi-round process, and storing all historical logs locally would be too heavy a burden for clients). Ultimately, the strategy deployed on the main network will likely be a full-node strategy with mandatory receipt Merkle proofs at first, loosening over time to support a growing number of medium nodes.
Note that Merkle proofs are not imported directly from one shard into another as packets; instead, all of the proof-carrying logic is done at the validator and client level, and a protocol-level interface is provided through which Merkle proofs can be accessed. The long ASYNC_DELAY makes it unlikely that a reorganization within one shard would require a coordinated reorganization of the entire system.
If shorter delays are required, one mechanism that can be built on top of the protocol is an intra-shard betting market: for example, within shard j, A can bet with B along the lines of "if block X is finalized in shard i, B agrees to send 0.001 ETH to A, whereas if block X is not finalized in shard i, A agrees to send 1000 ETH to B". The Casper deposit can be dual-used for this purpose: even though the bet lives in shard j, the information that A lost the bet would be conveyed via receipt to shard 0, and once A withdraws, shard 0 would transfer 1000 ether to B. B can therefore treat A's bet as a credible signal that a block on another shard will be finalized, act on it accordingly, and also receive insurance against A's judgment being wrong (even with the dual-use scheme the insurance is imperfect: if A is malicious, they will lose their whole deposit across many such bets, so an individual B may recover little or nothing). This scheme has a scalability limit proportional to the square of a node's computing power. There are two reasons: first, an amount of computation proportional to the number of shards must be performed on shard 0 to account for the finality bets; second, all clients must be light clients of all shards. Hence, if a node's computing power is N, there should be O(N) shards, each with O(N) processing capacity, giving O(N^2) total capacity. Going beyond this maximum would require a more complex sharding protocol that composes validity claims into some kind of tree structure, which is beyond the scope of this paper.