Original title: Serenity: The Choices and Philosophy of the ETH2.0 Beacon Chain
Original author: Steven | Chain Hill Capital Co-Founder & Partner
This article was written by Steven, a partner at Chain Hill Capital and head of its Alpha Strategy. Unauthorized reprinting is strictly prohibited; to reprint, please contact Chain Hill Capital's official account. The main text follows:
Since the Ethereum development team announced the launch conditions for ETH2.0 and published the deposit contract address on November 4th, committed ETH participants have each staked 32 or more ETH to support the launch. On November 24th the launch requirement of at least 16,384 validator deposits of 32 ETH each (524,288 ETH in total) was exceeded, and the ETH2.0 mainnet launched on schedule in the early morning of December 1st.
1.0 Principles: Serenity's Design Philosophy
Simplicity
Given the inherent cryptoeconomic complexity of Proof of Stake and sharding, Serenity is designed to be as simple as possible in order to: 1) minimize development costs; 2) reduce the risk of unforeseen security issues; and 3) make it easier for future protocol developers to explain the details and legitimacy of the protocol to their users. (On the third point: where some complexity is unavoidable, it should be placed according to this order of preference: Layer 2 protocols > client implementations > the protocol specification.)
Long-term stability
The lower levels of the protocol should be solid and predictable enough that no changes are needed for the next decade or longer, so that any innovation can be built on top of them at higher protocol layers.
Sufficiency
Serenity should make it fundamentally possible to build as many classes of applications as possible on top of the protocol.
Defense in depth
The protocol should continue to work as well as possible under every plausible security assumption (about network latency, the number of failures, and users' motivations).
Full light-client verifiability
Even light nodes should be able to verify the chain.
2.0 Layer 1 vs Layer 2: the trade-off between the two protocol layers
In any blockchain protocol there is a debate over whether to put more features into Layer 1, or to keep Layer 1 as simple as possible and build more features on Layer 2.
The arguments in favor of Layer 2 include:
a. It reduces the complexity of the consensus layer
b. It reduces the need to modify the consensus layer
c. It reduces the risk of consensus-layer failure
d. It reduces the load on, and political risk of, protocol governance
e. It is more flexible, able to adopt new ideas over time
The arguments in favor of Layer 1 are:
a. It reduces the risk that development stalls for lack of a mechanism (a hard fork) to force everyone to upgrade to a new protocol
b. It can potentially reduce the complexity of the system as a whole
c. If Layer 1 is not powerful enough, complex and ambitious mechanisms cannot be built on Layer 2 at all (just as there is no way to build Ethereum on top of the Bitcoin network)
Most of Ethereum 2.0 is a careful trade-off between Layer 1 and Layer 2. The capabilities committed to Layer 1 are these three:
1) Quasi-Turing-complete, richly stateful code execution
2) Scalable computation and data availability
3) Sufficiently fast block times
Specifically:
Without 1), Layer 2 applications cannot be built under a fully trust-minimized model;
Without 2), scalability is limited to state channels and Plasma-like techniques, which often face problems of capital lock-up and mass exits;
Without 3), timely transactions are impossible without state channels, which again leads to capital lock-up and mass-exit problems.
Beyond these features, ETH2.0 leaves 1) privacy, 2) high-level programming languages, 3) scalable state storage, and 4) signature schemes to Layer 2, because these are all areas of rapid innovation where existing schemes have different characteristics, and trade-offs among ever-better solutions are inevitable in the future. For example:
1) Privacy: ring signatures + confidential values vs ZK-SNARKs vs ZK-STARKs; rollup vs ZEXE vs ...
2) High-level programming languages: declarative vs imperative style, syntax, formal verification features, type systems, safety features, and native support for privacy features
3) Scalable state storage: accounts vs UTXOs, different state rent schemes, raw Merkle branch witnesses vs SNARK/STARK compression vs RSA accumulators, sparse Merkle trees vs AVL trees vs usage-based imbalanced trees
3.0 Why Casper: why it was chosen as the PoS solution
There are currently three mainstream families of PoS consensus algorithms:
a. Nakamoto-inspired, such as Peercoin, NXT, Ouroboros…
b. PBFT-inspired, such as Tendermint, Casper FFG, Hotstuff
c. CBC Casper
For the latter two families there is a question of whether and how to use security deposits and slashing (the first family is incompatible with slashing). All three are better than proof of work; below we describe ETH2.0's approach in detail.
Slashing
The slashing mechanism used by Ethereum 2.0 means that when a validator is caught misbehaving, the tokens it staked to act as a validator in the network are confiscated. In the mildest case roughly 1% of the offending validator's stake is destroyed; in the worst case, when a large fraction of validators misbehave together, the entire stake is at risk. The point of this approach is to:
1) Raise the cost of attack
2) Counteract validator laziness. The biggest temptation for validators to deviate from honest behavior is laziness (signing everything without verifying), and heavy penalties for contradictory or incorrect signatures largely solve this problem. There is a very typical case on this point: in July 2019 a validator on Cosmos was slashed for signing two conflicting blocks. The validator's mistake was simply that it ran a primary node and a backup node at the same time (to make sure that one going offline would not cost it rewards), and both nodes were switched on simultaneously, causing them to end up contradicting each other.
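The double-signing failure described above maps directly onto Casper FFG's slashing conditions. Below is a minimal Python sketch modeled on, but simplified from, the Phase 0 spec's `is_slashable_attestation_data`; the field names here are illustrative, not the spec's exact containers:

```python
from dataclasses import dataclass

@dataclass
class AttestationData:
    source_epoch: int  # last justified checkpoint the validator votes from
    target_epoch: int  # checkpoint the validator votes to justify
    head_root: str     # block the validator sees as the chain head

def is_slashable(a: AttestationData, b: AttestationData) -> bool:
    """Casper FFG slashing conditions (simplified):
    1) double vote: two different attestations with the same target epoch;
    2) surround vote: one attestation's source..target span strictly
       surrounds the other's."""
    double_vote = a != b and a.target_epoch == b.target_epoch
    surround_vote = a.source_epoch < b.source_epoch and b.target_epoch < a.target_epoch
    return double_vote or surround_vote

# The Cosmos-style failure described above: a primary and a backup node
# sign two conflicting attestations for the same target epoch.
primary = AttestationData(source_epoch=10, target_epoch=11, head_root="0xaa")
backup = AttestationData(source_epoch=10, target_epoch=11, head_root="0xbb")
print(is_slashable(primary, backup))  # True: a double vote
```

Note that signing nothing at all is merely penalized mildly for inactivity, never slashed, which is why running redundant signers is riskier than going offline.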
Choice of Consensus Algorithm
When a large fraction of validators can misbehave (up to 1/3 in BFT-inspired designs, 1/4 in CBC), only the BFT-inspired and CBC consensus algorithms can still provide finality; Nakamoto-inspired consensus cannot achieve finality under that assumption. Finality requires that most validators are online, a requirement the sharding mechanism needs anyway, because cross-shard communication requires signatures from 2/3 of a randomly sampled committee of validators.
4.0 Sharding - why ETH2.0 hates super nodes
For Layer 1, the main alternative to sharding is to scale with super nodes: require every consensus node to run a powerful server so that it can process every transaction by itself. Super-node scaling is convenient because it is simple to implement: it just adds some parallel software engineering on top of how existing blockchains already work.
For this approach, the main problems faced are as follows:
1) Risk of staking-pool centralization: the fixed cost of running a node is high, so few users can participate. If fixed costs eat up most of the rewards of running a validator, large pools enjoy economies of scale that small pools do not, so small pools keep getting squeezed out and the centralization trend worsens. By contrast, in a sharded system a node that stakes more ETH must verify more transactions, so its costs are not fixed.
2) Risk of AWS centralization: under a super-node system, home staking is all but impossible, and most staking would happen in cloud-computing environments, greatly increasing the risk of single points of failure.
3) Scalability: as transaction throughput grows, the risks above grow with it, whereas in a sharded system the added load is spread out, which keeps those risks down.
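Point 1) can be made concrete with a back-of-the-envelope comparison. All numbers below are illustrative assumptions, not protocol constants: a fixed yearly server cost and a flat staking yield show why fixed costs favor large pools under a super-node design:

```python
# Illustrative assumptions (not protocol constants).
ANNUAL_YIELD = 0.05   # assumed gross staking return
SERVER_COST = 3000    # assumed fixed cost of one super node, USD/year
ETH_PRICE = 600       # assumed ETH price, USD

def net_margin(staked_eth: float) -> float:
    """Net annual return after the fixed node cost is deducted."""
    gross = staked_eth * ETH_PRICE * ANNUAL_YIELD
    return (gross - SERVER_COST) / (staked_eth * ETH_PRICE)

# The same fixed cost wipes out a solo staker's yield but barely
# dents a large pool's.
print(f"32 ETH solo staker: {net_margin(32):.1%}")
print(f"32,000 ETH pool:    {net_margin(32_000):.1%}")
```

Under these assumptions the solo staker's margin is negative while the pool keeps nearly the full yield, which is exactly the squeeze-out dynamic described above.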
These centralization risks are also why ETH2.0 chose not to pursue ultra-low latency (<1s), setting its block time at a relatively conservative value instead.
5.0 Security Model
ETH2.’s defense in depth and sharding approach is to combine random committee sampling to achieve validity and availability under the honest majority model, while providing proof of custody to prevent lazy actors, and providing proof of fraud and proof of data availability, In order to detect invalid or unusable chains without downloading and verifying all data; this will allow clients to reject invalid or unusable chains.
6.0 How is the reward mechanism of Casper designed?
In each epoch, every validator publishes its own attestation: it points at what it believes is the head of the chain and signs it. If the attestation is included in a block, the validator receives the following rewards:
1) A reward for the attestation being included
2) A reward for specifying the correct epoch checkpoint
3) A reward for specifying the correct chain head
4) A reward for the attestation being included on chain quickly
5) A reward for specifying the correct shard block
In the various scenarios, the returns are calculated as follows:
Let B = the base reward, and P = the proportion of validators that made the correct judgement.
Any validator that makes a correct judgement is rewarded with B*P.
The formula for B is:
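The formula image from the original article was not preserved. Per the Eth2 Phase 0 spec, the base reward depends on the validator's effective balance and on the square root of the total staked balance. A sketch (the constants are the Phase 0 spec values; balances are in Gwei):

```python
from math import isqrt

# Constants from the Eth2 Phase 0 spec.
BASE_REWARD_FACTOR = 64
BASE_REWARDS_PER_EPOCH = 4
GWEI_PER_ETH = 10**9

def base_reward(effective_balance: int, total_balance: int) -> int:
    """Per-epoch base reward B for one validator, in Gwei. It shrinks
    as total stake grows, scaling like 1/sqrt(total_balance)."""
    return (effective_balance * BASE_REWARD_FACTOR
            // isqrt(total_balance) // BASE_REWARDS_PER_EPOCH)

def correct_judgement_reward(b: int, participating: int, total: int) -> int:
    """The B*P rule described above: a correct vote earns the base
    reward scaled by the fraction of validators who also voted."""
    return b * participating // total

# Example: one 32-ETH validator with the launch-threshold 524,288 ETH
# staked network-wide.
b = base_reward(32 * GWEI_PER_ETH, 524_288 * GWEI_PER_ETH)
print(b, correct_judgement_reward(b, 99, 100))
```

The 1/sqrt scaling is a deliberate middle ground: total issuance still grows with total stake, but each validator's yield falls as more ETH is staked, discouraging unbounded stake concentration.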
7.0 Beacon chain/shard chain structure
The sharding system consists of 64 logical shards, with all activity coordinated around the beacon chain.
The process for a transaction to be finally confirmed in this system is as follows:
1) The transaction is included in a block on one of the shards
2) A randomly selected committee of validators is assigned to that shard to verify and sign the block
3) The committee's signature is packed into the next beacon block
4) That beacon block is then finalized via Casper FFG
Each shard block is connected to the next beacon block by hash, so shards can quickly learn each other's Merkle roots and thereby verify each other's receipts:
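The hash-linking described above can be sketched as follows. This is a toy model, not the actual spec data structures: the function names and the way roots are combined are hypothetical, but the linkage pattern (shard blocks commit to a beacon root; the next beacon block commits to all shard roots) mirrors the text:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Toy hash combiner standing in for SSZ hash-tree-roots."""
    return hashlib.sha256(b"".join(parts)).digest()

def shard_block_root(beacon_root: bytes, txs_root: bytes) -> bytes:
    # Each shard block commits to a recent beacon block root.
    return h(beacon_root, txs_root)

def beacon_block_root(parent_root: bytes, shard_roots: list[bytes]) -> bytes:
    # The next beacon block commits to the latest root of every shard.
    return h(parent_root, *shard_roots)

beacon_n = h(b"beacon block N")
shard_roots = [shard_block_root(beacon_n, h(b"shard %d txs" % i))
               for i in range(64)]
beacon_n1 = beacon_block_root(beacon_n, shard_roots)
# Any shard can now verify another shard's receipts against the
# shard roots committed in beacon block N+1.
print(beacon_n1.hex())
```

Because every shard root is anchored in the beacon chain, a receipt produced on shard A can be proven to shard B with a Merkle branch against a finalized beacon block, without shard B processing shard A's transactions.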
8.0 Epilogue: on the future of ETH2.0
"Tranquility" is only the first step in ETH2.0's long journey, but from the trade-offs made in this first step it is clear that over the past three years the team has thought deeply about fairness and efficiency. It has not blindly chased the so-called million TPS; instead, on the premise of guaranteed security, it has pursued what is practical and feasible.
I believe that for a long time to come, ETH will remain the cornerstone of the entire blockchain network. This year's DeFi was just a grand social experiment under immature network conditions; as ETH2.0 matures through 2021-2022, far greater commercial and social value will inevitably emerge.
About Chain Hill Capital
Since its founding in 2017, Chain Hill Capital (Qianfeng Capital) has focused on value investing in blockchain projects worldwide. It has built an investment matrix of early- and growth-stage equity investments and the Alpha Strategy and Beta Strategy crypto-asset funds, and it maintains a global network of relationships with a strategic presence in city nodes including Chicago, New York, Tokyo, Beijing, Shanghai, Shenzhen, Hong Kong, and Xiamen. With extensive ties to overseas investment institutions and a global pool of high-quality projects, it is an international blockchain venture capital fund.
It is supported by a professional team with multicultural backgrounds: members of the core departments (Investment Research, Trading, and Risk Control) come from well-known universities and institutions at home and abroad, with solid financial backgrounds, strong investment research capability, a keen sense of the market, and deep respect for markets and risk. The Investment Research department combines rigorous fundamental research with mathematical and statistical models to produce investment strategies such as "Pure Alpha" and "Smart Beta".
