Jump Crypto: Detailed Explanation of Various Blockchain Expansion Solutions
Chain Catcher
Guest Columnist
2022-04-01 09:53
The ability to efficiently scale a blockchain is a key factor in determining the future success of the crypto industry.

Original title: "A Framework for Analyzing L1s"

Compiled by: Hu Tao, Chain Catcher

Related Reading: "Jump Crypto: How to Build a Layer 1 Analysis Framework"

Introduction

In a previous article, we developed a framework for analyzing L1s, particularly in light of the many new chains built recently. We also briefly noted that the motivation behind many of these novel L1s is primarily focused on finding solutions to blockchain scalability. Let's take a closer look at some of these solutions. In this article, we aim to:

  • Provide an overview of various Layer 1 and Layer 2 scaling solutions.

  • Analyze and compare these different solutions along some core dimensions.


The Scalability Trilemma

In an early 2017 blog post, Vitalik Buterin proposed the scalability trilemma, referring to three main properties that define the viability of blockchain systems: (1) decentralization; (2) security; and (3) scalability.

Of these three, we believe that scalability remains the hardest problem to solve without unduly compromising the other two pillars. Security and decentralization remain critical to the performance of these systems, but as we will see later, addressing the challenges of scaling distributed systems also provides key breakthroughs for decentralization and security, for very fundamental reasons. We therefore emphasize that the ability to efficiently scale blockchains will be a key determinant of the future success of the crypto industry more generally.

Broadly speaking, there are two main categories of scaling: Layer 1 and Layer 2. Both are related and critical to increasing the throughput of a blockchain, but they focus on different aspects, and even different layers, of the Web3 stack. Scaling has received a great deal of attention over the past few years and is often touted as the key path to mass adoption of blockchain technology, especially as retail usage continues to climb and transaction volumes increase.

Layer 1 (L1s)

Among Layer 1 designs, a few major scaling architectures stand out:

  • State sharding

  • Parallel execution

  • Consensus models

State Sharding

There are many kinds of sharding, but the core principles always remain the same:

  • Sharding distributes the cost of verification and computation, so that no single node needs to verify every transaction.

  • Nodes in a shard, just like in a larger chain, must: (1) relay transactions; (2) validate transactions; and (3) store the state of the shard.

  • Shard chains should preserve the security primitives of non-shard chains through: (1) efficient consensus mechanisms; (2) security proofs or signature aggregation.

Sharding allows a chain to be split into K different independent subnetworks, or shards. If the network's full node set S contains N nodes in total, then N/K nodes operate each of the K shards. When the set of nodes in a given shard (say K_1) validates a block, it publishes a proof, or a set of signatures, attesting that the shard's block is valid. All the other nodes, S − K_1, then only need to verify the signature or proof. (Verification usually takes much less time than re-running the computation itself.)
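To make the arithmetic concrete, here is a minimal sketch of the verification savings (all node counts and relative costs are illustrative assumptions, not any real chain's parameters):

```python
# Toy model of sharded verification (illustrative numbers only).
N_NODES = 10_000   # total nodes in the network (the set S)
K_SHARDS = 64      # number of shards

nodes_per_shard = N_NODES // K_SHARDS

COST_REEXECUTE_BLOCK = 100.0  # re-running every transaction in a block
COST_VERIFY_PROOF = 1.0       # checking an aggregated signature / proof

# Without sharding, every node re-executes every shard's worth of blocks.
unsharded_work = K_SHARDS * COST_REEXECUTE_BLOCK

# With sharding, a node re-executes only its own shard's block and merely
# verifies cheap proofs for the other K - 1 shards.
sharded_work = COST_REEXECUTE_BLOCK + (K_SHARDS - 1) * COST_VERIFY_PROOF

print(f"{nodes_per_shard} nodes per shard")
print(f"per-node work, unsharded: {unsharded_work:.0f} units")
print(f"per-node work, sharded:   {sharded_work:.0f} units")
```

The point is simply that per-node work grows with the cost of verifying K − 1 cheap proofs rather than re-executing K full blocks.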

A more in-depth technical explanation by Vitalik can be found here. Sharding has been the most prominent basic component of the Ethereum 2.0 development roadmap in recent years.

Parallel Execution

Sharding and parallel execution are similar in many ways. While sharding attempts to validate blocks in parallel on different subchains, parallel execution focuses on splitting up transaction-processing work within individual nodes. The effect of this architecture is that nodes can now process thousands of contracts in parallel!
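As a toy illustration of the idea (not any particular chain's scheduler), here is a sketch in which transactions touching disjoint accounts run concurrently; real runtimes detect conflicts first, for example via declared read/write sets as in Solana, or optimistically with rollback:

```python
# Minimal sketch of parallel transaction execution: transactions that touch
# disjoint accounts can safely run at the same time.
from concurrent.futures import ThreadPoolExecutor

balances = {"alice": 100, "bob": 50, "carol": 75, "dave": 20}

def transfer(sender, receiver, amount):
    balances[sender] -= amount
    balances[receiver] += amount

txs = [
    ("alice", "bob", 10),   # touches {alice, bob}
    ("carol", "dave", 5),   # touches {carol, dave} -- disjoint, parallelizable
]

# This batch is known to be conflict-free, so we just fan it out.
with ThreadPoolExecutor() as pool:
    for tx in txs:
        pool.submit(transfer, *tx)

print(balances)
```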


Consensus Models

Consensus is at the heart of Layer 1 blockchain protocols: for transactions and data to be finalized on-chain, participants in the network need a way to mutually agree on the state of the chain. Consensus is therefore a means of ensuring the consistency of the shared state as new transactions are added and the chain progresses. Different consensus mechanisms, however, lead to fundamental differences in the key metrics we use to measure blockchain performance: security, fault tolerance, decentralization, scalability, and more. That said, the consensus model alone cannot determine the performance of a blockchain system. Different consensus models suit different scaling mechanisms, and it is this pairing that ultimately determines the efficacy of a particular network.

Layer 2 (L2s)

Fundamentally, layer 2 scaling is based on the premise that resources on layer 1, whether compute or otherwise, become prohibitively expensive. To reduce costs for users, services, and other participants, heavy computational loads should be moved off-chain (layer 2), while still attempting to preserve the underlying security guarantees provided by layer 1's cryptographic and game-theoretic primitives (public-private key pairs, elliptic curves, consensus models, etc.).

One approach, proposed in this article, allows the creation of an unlimited number of side chains and then uses fraud proofs (under PoW or PoS) to finalize transactions on layer 1.

Rollups (what are they good for)

Rollup is also a way to move computation off-chain (layer 2) while still recording messages or transactions on-chain (layer 1). Transactions that would otherwise be recorded, mined and verified at layer 1 are recorded, aggregated and verified at layer 2, and then published to the original layer 1. This model achieves two goals: (1) freeing up computing resources at the base layer; (2) still preserving the underlying cryptographic security guarantees of layer 1.

  • Transactions are "rolled up" and passed to an inbox contract, ordered by the Sequencer (a minimal sketch follows this list)

  • Contracts stored on L2 execute off-chain contract calls

  • The contract then sends the Merkle root of the new state back to the L1 chain as calldata
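Here is a minimal sketch of that flow (the transactions and state entries are hypothetical; a real rollup compresses the batch and uses its chain's own hashing and data format):

```python
# Minimal rollup-batch sketch: execute transactions off-chain, commit the new
# state as a Merkle root, and post only compact data back to L1.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    nodes = [h(leaf) for leaf in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:                 # duplicate the last node on odd levels
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# The Sequencer orders the batch; the L2 executes it off-chain...
batch = [b"tx1: alice->bob 10", b"tx2: carol->dave 5"]
new_state_root = merkle_root([b"alice:90", b"bob:60", b"carol:70", b"dave:25"])

# ...and only the batch data plus the new state root land on L1 as calldata.
calldata = b"".join(batch) + new_state_root
print("new state root:", new_state_root.hex())
```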

Optimistic Rollup

Validators publish transactions to the chain on the a priori assumption that they are valid. Other validators can challenge transactions if they so choose, but certainly don't have to (think of it as an innocent-until-proven-guilty model). Once a challenge is initiated, however, the two parties (say, Alice and Bob) are forced to participate in a dispute resolution protocol.

At a high level, the dispute resolution algorithm works as follows (a toy version in code follows the list):

  1. Alice claims that her assertion is correct. Bob disagrees.

  2. Alice then divides the assertion into equal parts (for simplicity, assume this is a bisection)

  3. Bob then has to choose which part of the assertion (say the first half) he thinks is false

  4. Run steps 1 - 3 recursively.

  5. Alice and Bob play this game until the disputed sub-assertion is just one instruction. Now, the protocol only needs to execute this single instruction. If Alice is correct, then Bob loses his stake, and vice versa.
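The following toy version of the bisection game shows why only one instruction is ever re-executed on-chain (the VM, staking, and timeouts of a real protocol like Arbitrum's are omitted; all names are made up):

```python
# Toy interactive bisection game (illustrative only).

def execute(instruction):                  # stand-in for the L2 VM
    return instruction % 7                 # arbitrary deterministic rule

def honest_claim(instructions):
    return [execute(i) for i in instructions]

def dispute(instructions, alice_results, bob_check):
    """Narrow a disputed assertion to one instruction, then execute it."""
    lo, hi = 0, len(instructions)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Bob picks the half he believes is wrong.
        if bob_check(instructions[lo:mid]) != alice_results[lo:mid]:
            hi = mid
        else:
            lo = mid
    # Only ONE instruction is re-executed on-chain:
    # O(log n) rounds, O(1) on-chain execution.
    return execute(instructions[lo]) == alice_results[lo]

instructions = list(range(16))
alice = honest_claim(instructions)
alice[11] = 999                            # Alice lies about instruction 11
print("Alice wins dispute:", dispute(instructions, alice, honest_claim))  # False
```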

A more in-depth explanation of the Arbitrum dispute resolution protocol is available here.

In the Optimistic case, the cost is small and constant: O(1). In the disputed case, the algorithm runs in O(log n), where n is the size of the original assertion.

A key consequence of this Optimistic verification and dispute resolution architecture is that Optimistic Rollups have an honest party guarantee, meaning that for the chain to be secure, the protocol only needs one honest party to spot and report fraud.

Zero-Knowledge Rollups

In many blockchain systems and layer 1s today, consensus is achieved by effectively "re-running" transaction computations to validate state updates to the chain. In other words, to finalize a transaction on the network, nodes need to perform the same computation. This might seem like a naive way to verify the history of the chain — and it is! The question then becomes: is there a way to quickly verify the correctness of transactions without having to replicate the computation across a large number of nodes? (For those with some background in complexity theory, this idea is at the heart of P vs. NP.) Well, yes! This is where ZK rollups come in handy: in effect, they ensure that the cost of verification is significantly lower than the cost of performing the computation.

Now, let's take a deeper look at how ZK-Rollups achieve this while maintaining a high level of security. A high-level ZK-rollup protocol includes the following components (a mock skeleton follows the list):

  • ZK Verifier: verifies proofs on-chain.

  • ZK Prover: takes data from an application or service and outputs a proof.

  • On-chain contract: tracks on-chain data and verifies the system's state.
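The skeleton below mirrors those three components (a mock: the "proof" is just a hash standing in for a SNARK/STARK, and names like RollupContract are hypothetical):

```python
# Mock skeleton of a ZK-rollup's components (structure only, no cryptography).
import hashlib

def mock_prove(old_state: str, batch: str, new_state: str) -> str:
    # ZK Prover: does the heavy work off-chain, outputs a short proof.
    return hashlib.sha256(f"{old_state}|{batch}|{new_state}".encode()).hexdigest()

def mock_verify(old_state: str, batch: str, new_state: str, proof: str) -> bool:
    # ZK Verifier: a cheap on-chain check. (This mock recomputes the hash;
    # a real verifier checks the proof WITHOUT re-running the computation.)
    return proof == mock_prove(old_state, batch, new_state)

class RollupContract:
    # On-chain contract: tracks the state root and accepts proven updates.
    def __init__(self, genesis: str):
        self.state = genesis

    def submit(self, batch: str, new_state: str, proof: str) -> None:
        assert mock_verify(self.state, batch, new_state, proof), "invalid proof"
        self.state = new_state

contract = RollupContract("state0")
contract.submit("batch1", "state1", mock_prove("state0", "batch1", "state1"))
print(contract.state)  # -> state1
```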

Zero-knowledge proof systems have proliferated, especially in the last year. There are two main classes of proofs: (1) SNARKs and (2) STARKs, although the line between them gets blurrier every day.

We won't go into the technical details of how ZK proof systems work here, but the original article includes a nice diagram of how one can obtain an efficiently verifiable proof from a smart contract.

Speed

Privacy

ZK proofs are inherently privacy-preserving because verifying a computation does not require access to its underlying inputs. Consider the following concrete example: suppose I want to prove to you that I know the combination of a locked box. A naive approach would be to share the combination with you and ask you to try to open the box. If the box opens, then obviously I know the combination. But suppose I have to prove that I know the combination without revealing anything about it. Let's design a simple ZK-proof protocol to demonstrate how this works:

  • I ask you to write a sentence on a piece of paper

  • I hand you the box and ask you to slip the paper through a small slit in the box

  • I turn my back to you and enter the combination to open the box

  • I take the note out and return it to you

  • You confirm that the note is yours!

That's it! A simple zero-knowledge proof. Once you confirm that the note is the same one you put in the box, I have shown you that I am able to open the box, and therefore that I knew the combination a priori.
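For the curious, the note-in-box protocol can be simulated directly (a toy simulation of the analogy, not a cryptographic ZK system; class and variable names are made up):

```python
# Toy simulation of the note-in-box protocol above.
import secrets

class LockedBox:
    def __init__(self, combination: str):
        self._combination = combination
        self._contents = []

    def slip_note(self, note: str):      # anyone can push a note through the slit
        self._contents.append(note)

    def open(self, attempt: str):        # only the right combination opens it
        if attempt != self._combination:
            raise PermissionError("wrong combination")
        return self._contents.pop()

combo = "31-7-24"
box = LockedBox(combo)

verifier_note = secrets.token_hex(8)     # unpredictable, so it can't be guessed
box.slip_note(verifier_note)

returned = box.open(combo)               # prover opens the box; combo never shown
print("prover knows the combination:", returned == verifier_note)
```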

In this way, zero-knowledge proofs are particularly good at allowing one party to prove the truth of a statement to another party without revealing any information the other party does not already have.

EVM Compatibility

The Ethereum Virtual Machine (EVM) defines a set of instructions, or opcodes, for basic computational and blockchain-specific operations. Smart contracts on Ethereum compile to bytecode, which is then executed as EVM opcodes. EVM compatibility means that there is a 1:1 mapping between the instruction set of the virtual machine you are running and the EVM instruction set.
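A compatibility check can be pictured as verifying that 1:1 mapping. In this sketch the opcodes listed are real EVM opcodes, while MY_L2_VM_OPCODES is a hypothetical stand-in for an L2's instruction set:

```python
# Illustrative "EVM compatibility" check as a 1:1 opcode mapping.
EVM_OPCODES = {
    0x00: "STOP",
    0x01: "ADD",
    0x02: "MUL",
    0x51: "MLOAD",
    0x52: "MSTORE",
    0x54: "SLOAD",
    0x55: "SSTORE",
    0x60: "PUSH1",
}

MY_L2_VM_OPCODES = dict(EVM_OPCODES)  # an EVM-compatible VM implements them all

def is_evm_compatible(vm_opcodes: dict) -> bool:
    # 1:1 mapping: every EVM opcode exists with the same meaning.
    return all(vm_opcodes.get(op) == name for op, name in EVM_OPCODES.items())

print(is_evm_compatible(MY_L2_VM_OPCODES))  # True -> contracts redeploy unchanged
```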

The largest layer 2 solutions on the market today are built on top of Ethereum. When Ethereum-native projects want to migrate to Layer 2, EVM compatibility provides a seamless, minimal-code scaling path. Projects just need to redeploy their contracts on L2 and bridge their tokens from L1.

Bridging

Because L2s are separate chains, they do not automatically inherit native L1 tokens. Native L1 tokens on Ethereum must be bridged to the corresponding L2 in order to interact with dApps and services deployed there. The ability to bridge tokens seamlessly remains a key challenge, with different projects exploring various architectures. Usually, once a user calls deposit on L1, an equivalent token needs to be minted on the L2 side. Designing a highly general architecture for this process can be particularly difficult because of the wide range of tokens and token-standard-specific protocols.
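A minimal lock-and-mint sketch of that deposit flow might look like this (contract and function names are hypothetical; real bridges insert relayers and proof verification between the two calls):

```python
# Minimal lock-and-mint bridge sketch (illustrative only).
class L1Bridge:
    def __init__(self):
        self.locked = {}

    def deposit(self, user: str, amount: int):
        # Lock native tokens on L1...
        self.locked[user] = self.locked.get(user, 0) + amount
        return {"event": "Deposited", "user": user, "amount": amount}

class L2Token:
    def __init__(self):
        self.balances = {}

    def mint(self, user: str, amount: int):
        # ...and mint an equivalent representation on L2.
        self.balances[user] = self.balances.get(user, 0) + amount

l1, l2 = L1Bridge(), L2Token()
event = l1.deposit("alice", 100)            # user calls deposit on L1
l2.mint(event["user"], event["amount"])     # a relayer mints on L2
print(l2.balances)  # {'alice': 100}
```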

Finality

Finality refers to the ability to confirm the validity of on-chain transactions. At layer 1, when a user submits a transaction, confirmation is nearly instantaneous (though it takes time for nodes to process transactions from the mempool). On layer 2, this is not necessarily the case. State updates submitted to layer 2 chains running an Optimistic Rollup protocol are first assumed to be valid. However, if the validator submitting an update is malicious, there must be enough time for an honest party to challenge the claim. Typically, this challenge period is set to about 7 days. In practice, users who want to withdraw funds from L2 may have to wait about 2 weeks!

ZK Rollups, on the other hand, do not require such a long challenge period because each state update is verified with a proof system. Transactions on a ZK Rollup are therefore as final as transactions on the underlying layer 1. Not surprisingly, the near-instant finality provided by ZK Rollups has become a key advantage in the battle for L2 scaling dominance.
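As a back-of-the-envelope illustration of the difference (the 7-day window comes from the text above; the extra batching/exit delay is an assumption used only to show how the roughly 2-week figure can arise):

```python
# Withdrawal timing under an Optimistic challenge window (illustrative).
from datetime import datetime, timedelta

withdrawal_submitted = datetime(2022, 4, 1, 12, 0)
challenge_period = timedelta(days=7)       # honest parties can dispute here
batch_and_exit_delay = timedelta(days=5)   # assumed operational overhead

funds_available = withdrawal_submitted + challenge_period + batch_and_exit_delay
print("funds claimable on L1 after:", funds_available)  # ~2 weeks total

# A ZK rollup withdrawal needs no challenge window: once the validity proof
# is verified on L1, the state update is final.
```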

Some argue that while Optimistic Rollups do not necessarily guarantee fast finality at L1, fast withdrawals offer a clear, easy-to-use solution by allowing users to access their funds before the challenge period ends. While this does provide a way for users to access their liquidity, there are several issues with this approach:

  • Additional overhead for maintaining liquidity pools for L2 to L1 withdrawals.

  • Fast withdrawals are not universal: only token withdrawals are supported. Arbitrary L2-to-L1 calls cannot be supported.

  • Liquidity providers cannot guarantee the validity of transactions until the end of the challenge period.

  • Liquidity providers must: (1) trust those to whom they provide liquidity, limiting the benefits of decentralization; or (2) construct their own fraud/validity proofs, which defeats the purpose of leveraging the fraud proofs and consensus protocol built into the L2 chain.

Sequencing

A sequencer is like any other full node, but it has arbitrary control over the ordering of transactions in the inbox queue. Without this ordering, other nodes and participants in the network cannot determine the outcome of a particular batch of transactions. In this sense, sequencing provides users with a level of certainty when executing transactions.
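A toy sequencer makes the role clear (illustrative only; the entire point of the decentralized-sequencing research cited below is that next_batch need not be first-come-first-served):

```python
# Toy sequencer: a plain FIFO inbox. Real sequencers may reorder, which is
# exactly the power -- and the centralization risk -- described above.
from collections import deque

class Sequencer:
    def __init__(self):
        self.inbox = deque()

    def receive(self, tx: str):
        self.inbox.append(tx)             # arrival order

    def next_batch(self, n: int) -> list:
        # A fair sequencer drains first-come-first-served; an arbitrary one
        # could pick any order here (e.g., to front-run).
        return [self.inbox.popleft() for _ in range(min(n, len(self.inbox)))]

seq = Sequencer()
for tx in ["tx-a", "tx-b", "tx-c"]:
    seq.receive(tx)
print(seq.next_batch(2))  # ['tx-a', 'tx-b']
```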

Take comfort in the fact that a lot of work and research is being done on decentralized fair sequencing (see here, here, and here).

Capital Efficiency

Another key point of comparison between Optimistic Rollups and ZK Rollups is their capital efficiency. As mentioned earlier, Optimistic L2 relies on Fraud Proofs to secure the chain, while ZK Rollups utilize Proofs of Validity.

The security provided by fraud proofs is based on a simple game-theoretic principle: the cost to an attacker of trying to fork the chain should exceed the value they are able to extract from the network. In the case of Optimistic Rollups, validators stake a certain amount of tokens (e.g. ETH) on Rollup blocks they believe to be valid as the chain progresses. Malicious actors (those found guilty after being reported by honest nodes) are slashed.
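That principle reduces to a one-line expected-value check (the numbers below are placeholders, not real protocol parameters):

```python
# An attack is irrational when expected slashing exceeds extractable value.
def attack_is_rational(extractable_value: float,
                       stake_at_risk: float,
                       p_detected: float) -> bool:
    expected_loss = stake_at_risk * p_detected        # fraud proof -> slashed
    expected_gain = extractable_value * (1 - p_detected)
    return expected_gain > expected_loss

# With even one honest watcher, detection is ~certain, so attacks don't pay:
print(attack_is_rational(extractable_value=1_000_000,
                         stake_at_risk=100_000,
                         p_detected=0.999))  # False
```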

Thus, there is a fundamental trade-off between capital efficiency and security. Improving capital efficiency means reducing the latency/challenge period, which in turn increases the likelihood that fraudulent assertions go undetected or unchallenged by other validators in the network.

Changing the challenge period is equivalent to moving along the capital-efficiency vs. latency curve. As the challenge period changes, however, users need to weigh the impact on the trade-off between security and finality; otherwise they would be indifferent to the change.

The current 7-day challenge periods for projects like Arbitrum and Optimism were determined by their communities with these considerations in mind. Ed Felten of Offchain Labs offers an in-depth explanation here of how they determined the optimal length of the challenge period.


App-Specific Chains and Scaling

When we talk about a multi-chain future, what exactly are we referring to? Will there be a large number of high-performance layer 1s with different architectures, more layer 2 scaling solutions, or just a few layer 3 chains with custom optimizations for specific use cases?

Our belief is that demand for blockchain-based services will be fundamentally driven by user demand for particular types of applications, whether NFT minting or DeFi protocols for lending, staking, and so on. In the long run, as with any technology, we expect users to want the underlying primitives abstracted away (in this case, the L1s and L2s that provide the core infrastructure for settlement, scalability, and security).


Application-specific chains provide a mechanism to deploy high-performance services by leveraging narrow optimizations. As such, we expect these types of chains to be a key component of the Web3 infrastructure designed to drive mass adoption.

There are two main ways in which these chains appear:

  • Separate ecosystems with their own primitives focus on very specific applications.

  • An additional layer built on top of existing L1 and L2 chains, but fine-tuned to optimize performance for specific use cases.

  • Flexibility and ease of use

  • High composability

  • Liquidity aggregation and access to native assets

Next-generation scaling infrastructure must strike a balance between these two approaches.

Fractal Scaling

How does it work?

  • Transactions are split among local instances based on the scenarios they are intended to serve.

  • Leverage the security, scalability, and privacy properties of the underlying L1/L2 layer while optimizing for unique custom needs

  • Utilizes a novel architecture (for storage and computation) based on proofs of proofs and recursive proofs

  • Any message is accompanied by a proof attesting to the validity of the message and the history leading up to it (a mock sketch follows this list)
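Structurally, "proofs of proofs" can be mocked with a hash tree (the hashes stand in for real recursive SNARKs/STARKs; this shows the shape of the construction, not its cryptography):

```python
# Mock of recursive proof aggregation for fractal scaling (structure only).
import hashlib

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def prove_leaf(message: str, history: str) -> str:
    # Each local instance proves its own message against its history.
    return h("leaf", message, history)

def prove_recursive(child_proofs: list) -> str:
    # One proof attesting that all child proofs were themselves verified.
    return h("node", *child_proofs)

leaves = [prove_leaf(f"msg{i}", f"hist{i}") for i in range(4)]
layer = [prove_recursive(leaves[:2]), prove_recursive(leaves[2:])]
root_proof = prove_recursive(layer)

# The base layer verifies ONE root proof instead of every message.
print(root_proof[:16], "...")
```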

A great article by Starkware discussing the architecture of fractal scaling is available here.

Final Thoughts

Blockchain scaling has become more prominent over the past few years, and for good reason: the computational cost of validating on a highly decentralized chain like Ethereum has become prohibitive. As blockchains have grown in popularity, the computational complexity of on-chain transactions has also grown rapidly, further increasing the cost of securing the chain. Optimizations to existing layer 1s, and architectures such as dynamic sharding, can be very valuable, but the dramatic increase in demand requires a more nuanced approach to developing secure, scalable, and sustainable decentralized systems.

We believe this approach is based on building chain layers optimized for specific behaviors, including general-purpose computation and privacy-enabling logic for specific applications. We therefore see Rollups and other layer 2 technologies as central to scaling throughput by enabling off-chain computation and storage with fast verification.
