Original source: Delphi Digital
Compiled by: Free and Easy - The Way of the Metaverse
Highlights of the report:
A monolithic chain is limited by what a single node can process, while a modular ecosystem overcomes this limitation and offers a more sustainable form of scaling;
A key motivation behind modularity is efficient resource pricing. Modular chains can provide more predictable fees by separating applications into different resource pools (i.e. fee markets);
However, modularity introduces a new problem called data availability (DA), which can be solved in several ways. For example, rollups process transactions off-chain in batches and submit the data on-chain. By making that data available on-chain, they overcome this problem and inherit the security of the base layer, establishing trustless L1-L2 communication;
The latest form of modular chain, the dedicated data availability (DA) layer, is designed to serve as a shared security layer for rollups. Given the scalability advantages of DA chains, they may be the endgame of blockchain scaling, and Celestia is the pioneering project in this regard.
ZK-rollups can provide more scalability than Optimistic rollups, as we have already observed in practice. For example, the throughput of dYdX is about 10 times that of Optimism, while consuming only 1/5 of the L1 space.
In essence, this is a scalability war, fought with technical terms such as parachains, sidechains, cross-chain bridges, zones, sharding, rollups, and data availability (DA). In this post, we try to cut through that noise and walk through the scalability war in detail. So grab a cup of coffee or tea and buckle up, because it's going to be a long ride.
Looking for Scalability
As we go through the different designs, we'll highlight some examples of each. But before we get started, it's important to define what scalability means. Simply put, scalability is the ability to process more transactions without increasing the cost of verification. With this in mind, let's look at current TPS figures for the major blockchains. In this article, we explain the design properties behind these different levels of throughput. Importantly, the numbers are not theoretical maximums, but actual values from the historical usage of these protocols.
Monolithic Chains vs. Modular Chains
Monolithic Blockchain
First, let's look at monolithic chains. In this camp, Polygon PoS and BSC don't fit our definition of scalability because they simply increase throughput with larger blocks, a trade-off that raises node resource requirements and sacrifices decentralization for performance. While such trade-offs have their market fit, they're not long-term solutions, so they're not all that compelling. Polygon recognizes this and is moving toward a more sustainable rollup-centric solution.
Solana, on the other hand, is a serious attempt to push the frontier of fully composable monolithic blockchains. Solana's secret sauce is a Proof-of-History (PoH) ledger. The idea of PoH is to create a global notion of time (a global clock): all transactions, including consensus votes, carry a reliable timestamp attached by the issuer. These timestamps allow nodes to make progress without waiting to sync with each other on every block. Solana also optimizes its execution environment to process transactions in parallel, instead of one at a time like the EVM, achieving better scaling.
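To make this concrete, here is a minimal sketch of a PoH-style hash chain (a toy of ours, not Solana's actual implementation): empty "ticks" prove that time passed, and mixing an event into the chain gives it an unforgeable position in the sequence.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ProofOfHistory:
    """Toy hash-chain clock: each entry proves time passed since the last."""

    def __init__(self, seed: bytes):
        self.state = sha256(seed)
        self.entries = [(0, self.state, None)]  # (tick, state, event)
        self.tick = 0

    def advance(self):
        # An empty tick: hashing is inherently sequential, so each step
        # acts as a verifiable unit of elapsed time.
        self.state = sha256(self.state)
        self.tick += 1
        self.entries.append((self.tick, self.state, None))

    def record(self, event: bytes):
        # Mixing an event into the chain timestamps it: it provably
        # existed before this tick, and its position cannot be forged.
        self.state = sha256(self.state + event)
        self.tick += 1
        self.entries.append((self.tick, self.state, event))

def verify(seed: bytes, entries) -> bool:
    # Anyone can replay the chain (segments can even be checked in
    # parallel) to confirm the claimed ordering of events.
    state = sha256(seed)
    for _, expected, event in entries[1:]:
        state = sha256(state + event) if event else sha256(state)
        if state != expected:
            return False
    return True

poh = ProofOfHistory(b"genesis")
poh.advance()
poh.record(b"tx: alice -> bob, 5")
assert verify(b"genesis", poh.entries)
```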
Although Solana achieves throughput gains, they still largely come from more intensive hardware and network bandwidth usage. While this reduces fees for users, it confines node operation to a limited number of data centers. Contrast this with Ethereum, which, while inaccessible to many due to high fees, is ultimately governed by its active users, who can run nodes from home.
How can a monolithic blockchain fail?
The scalability of a monolithic blockchain is ultimately limited by the processing power of a single powerful node. Regardless of one's subjective view of decentralization, that capacity can only be pushed so far before governance is limited to a relatively small number of actors. In contrast, a modular chain splits the total workload among different nodes and can thus produce more throughput than any single node can handle.
Crucially, decentralization is only half the picture of modularity. Just as important is the other motivation behind it: efficient resource pricing (i.e. fees). In a monolithic chain, all transactions compete for the same block space and consume the same resources. Thus, on a congested blockchain, excess market demand for a single application can adversely affect every application on the chain, as fees rise for everyone. This problem has existed since CryptoKitties congested the Ethereum network in 2017. Importantly, extra throughput never really solved the problem; it only delayed it. The history of the internet teaches that every increase in capacity makes room for previously nonviable applications, which tend to quickly consume the capacity just added.
Going forward, it would be naive to expect a single pool of resources to reliably support a wide variety of crypto applications (from Metaverse and games to DeFi and payments). While increasing the throughput of fully composable chains is useful, we need a wider design space and better resource pricing for mainstream adoption. This is where the modular approach comes into play.
The Evolution of Blockchain
In the sacred mission of scaling, we have witnessed a shift from "composability" to "modularity". First, let's define these terms: composability refers to the ability of applications to interact with each other seamlessly and with minimal friction, while modularity is a tool for decomposing a system into separate parts (modules) that can be stripped out and reassembled at will.
Ethereum rollups, ETH 2.0 shards, Cosmos zones, Polkadot parachains, Avalanche subnets, Near's chunks, and Algorand's co-chains can all be regarded as modules. Each module handles a subset of the total workload in its respective ecosystem while retaining the ability to communicate across modules. As we dig deeper into these ecosystems, we notice that modular designs differ greatly in how they implement security across modules.
Multi-chain hubs such as Avalanche, Cosmos, and Algorand are best suited for modules with independent security, while Ethereum, Polkadot, Near, and Celestia (a relatively new L1 design) envision modules that eventually share or inherit each other's security.
Multi-chain/multi-network Hub
The simplest modular design is called an interoperability hub: multiple chains/networks communicating with each other through standard protocols. Hubs provide a wider design space, allowing application-specific blockchains to be customized at many different levels, including the virtual machine (VM), node requirements, fee model, and governance. The flexibility of an appchain is unmatched by smart contracts on a general-purpose chain. Let's briefly review some examples:
Terra, which powers over $8 billion worth of decentralized stablecoins, has a special fee and inflation model optimized for the adoption and stability of its stablecoins.
Osmosis, currently the cross-chain DEX with the largest IBC volume, encrypts transactions until they are finalized in order to prevent front-running.
Algorand and Avalanche are designed to host enterprise use cases on custom networks. These range from a CBDC run by a government agency to a gaming network run by a consortium of game companies. Importantly, the throughput of such a network can be increased with more powerful machines without compromising the decentralization of other networks/chains.
Hubs also offer scalability advantages because they use resources more efficiently. Taking Avalanche as an example, the C-Chain is used for EVM-compatible smart contracts, while the X-Chain is used for P2P payments. Because payments are usually independent of each other (Bob paying Charlie does not depend on Alice paying Dana), the X-Chain can process some transactions concurrently. By separating VMs from core utilities, Avalanche can handle more transactions.
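As an illustration of why independence matters, here is a hedged sketch (a toy model of ours, not the X-Chain's actual code) that splits payments into waves of non-conflicting transactions and executes each wave concurrently:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

# Hypothetical payments: (sender, receiver, amount). Two payments conflict
# only if they touch a common account, so disjoint ones can run in parallel.
payments = [("bob", "charlie", 5), ("alice", "dana", 3), ("bob", "eve", 2)]

def batch_independent(txs):
    """Greedily split txs into waves whose members touch disjoint accounts."""
    waves = []
    for tx in txs:
        accounts = {tx[0], tx[1]}
        for wave, touched in waves:
            if touched.isdisjoint(accounts):
                wave.append(tx)
                touched |= accounts
                break
        else:
            waves.append(([tx], accounts))
    return [wave for wave, _ in waves]

balances = {"alice": 10, "bob": 10, "charlie": 0, "dana": 0, "eve": 0}
lock = Lock()

def apply_payment(tx):
    sender, receiver, amount = tx
    with lock:  # belt-and-braces; txs within one wave are already disjoint
        balances[sender] -= amount
        balances[receiver] += amount

for wave in batch_independent(payments):
    # Each wave contains only non-conflicting payments, so it is safe
    # to execute its members concurrently.
    with ThreadPoolExecutor() as pool:
        list(pool.map(apply_payment, wave))

print(balances)
```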
These ecosystems can also scale vertically through fundamental innovations. Avalanche and Algorand stand out here in particular because they scale better by reducing the communication overhead of consensus: Avalanche achieves this through "subsampled voting", while Algorand uses cheap VRFs to randomly select a unique committee to reach consensus on each block.
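The sketch below illustrates the subsampled-voting idea with a toy binary vote; the parameters k, alpha, and beta are illustrative, not Avalanche's actual values:

```python
import random

def subsampled_vote(peer_prefs, k=8, alpha=6, beta=10):
    """Toy Avalanche-style consensus on a yes/no question.

    Rather than collecting votes from every node (quadratic messaging),
    a node repeatedly polls k random peers; if at least alpha of them
    agree, that answer gains confidence, and beta consecutive confident
    rounds count as finalized.
    """
    preference = random.choice([True, False])
    confidence = 0
    while confidence < beta:
        sample = random.sample(peer_prefs, k)
        yes = sum(sample)
        if yes >= alpha:
            winner = True
        elif k - yes >= alpha:
            winner = False
        else:
            confidence = 0       # no supermajority in this sample
            continue
        confidence = confidence + 1 if winner == preference else 1
        preference = winner
    return preference

# 1,000 simulated peers, 80% of whom currently prefer "yes":
peers = [random.random() < 0.8 for _ in range(1000)]
print(subsampled_vote(peers))   # converges to the majority w.h.p.
```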
Above, we have listed the advantages of the hub approach. However, it also suffers from some key limitations. The most obvious is that each blockchain needs to bootstrap its own security, since chains cannot share or inherit each other's security. It is well established that secure cross-chain communication requires either a trusted third party or a synchrony assumption. In the hub approach, the trusted third party is the validator set of the counterparty chain.
For example, tokens bridged from one chain to another via IBC can always be redeemed (stolen) by a malicious majority of the source chain's validators. This majority-trust assumption may work well today, when only a few chains coexist. However, in a future with a long tail of chains/networks, expecting them all to trust each other's validators in order to communicate or share liquidity is far from ideal. This brings us to rollups and shards, which provide cross-chain communication with guarantees stronger than majority trust.
(While Cosmos plans to introduce shared staking across zones, and Avalanche allows multiple chains to be validated by the same network, these solutions are less scalable because they place higher demands on validators. In practice, they are likely to be adopted by the most active chains, not the long tail.)
Data Availability (DA)
After years of research, it is generally accepted that all attempts at shared security boil down to a very subtle problem called data availability (DA). To understand why, we need to take a quick look at how nodes operate in a typical blockchain.
In a typical blockchain (e.g. Ethereum), full nodes download and validate all transactions, while light nodes only check block headers (block digests signed by a majority of validators). Therefore, while full nodes can independently detect and reject invalid transactions (such as one minting unlimited tokens), light nodes accept anything signed by a majority as valid.
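The difference can be made concrete with a toy sketch (hypothetical Block/Header structures, not Ethereum's actual ones): the light client checks only the signature count, while the full node re-executes every transaction:

```python
from dataclasses import dataclass

@dataclass
class Header:
    parent: str
    tx_root: str          # commitment to the block's transactions
    validator_sigs: int   # how many validators signed off

@dataclass
class Block:
    header: Header
    txs: list             # the full transaction list

VALIDATOR_SET_SIZE = 100

def light_node_accepts(header: Header) -> bool:
    # A light client only checks that a majority of validators signed.
    # It cannot tell whether the txs behind tx_root are actually valid.
    return header.validator_sigs * 2 > VALIDATOR_SET_SIZE

def full_node_accepts(block: Block, state: dict) -> bool:
    # A full node re-executes every transaction against its own state,
    # so it rejects invalid blocks even if a majority signed them.
    if not light_node_accepts(block.header):
        return False
    for sender, receiver, amount in block.txs:
        if state.get(sender, 0) < amount:   # e.g. minting from nothing
            return False
        state[sender] = state.get(sender, 0) - amount
        state[receiver] = state.get(receiver, 0) + amount
    return True
```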
To improve on this, ideally any single full node could secure all light nodes by publishing small fraud proofs. With such a design, light nodes could operate with security guarantees similar to full nodes without expending nearly as many resources. However, this introduces a new problem: data availability (DA).
If a malicious validator publishes a block header but withholds some or all of the transactions in the block, full nodes cannot tell whether the block is valid, because the missing transactions may be invalid or double spends. Without this knowledge, full nodes cannot generate fraud proofs to protect light nodes. In short, for the protection mechanism to work, light nodes must first ensure that validators have published the complete list of transactions.
Rollup
The DA problem is an integral part of any modular design that aims to go beyond majority-trust assumptions for cross-chain communication. Among L2s, rollups are special because they do not try to sidestep this problem.
In the rollup setting, we can regard the main chain (Ethereum) as a light node of the rollup (e.g. Arbitrum). A rollup publishes all of its transaction data on L1 so that any L1 node willing to commit the resources can execute it and build the rollup state from scratch. With the complete state, anyone can transition the rollup to a new state and prove the validity of the transition by issuing a validity or fraud proof. Having the data available on the main chain lets rollups operate under a minimal single-honest-node assumption rather than an honest majority.
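Conceptually, rebuilding a rollup's state from L1 looks like the following toy replay loop (a sketch with a simplified balance-map state, not any rollup's actual node code):

```python
def rebuild_rollup_state(l1_batches):
    """Replay every batch a rollup has posted to L1 to derive its state.

    Because the full transaction data is on-chain, any single honest
    node can do this from scratch and then issue (or verify) a fraud
    proof against an incorrect state root; no honest majority is needed.
    """
    state = {"bridge": 1_000}  # toy state: account -> balance
    for batch in l1_batches:
        for sender, receiver, amount in batch:
            if state.get(sender, 0) < amount:
                continue  # invalid txs are simply rejected during replay
            state[sender] -= amount
            state[receiver] = state.get(receiver, 0) + amount
    return state

# Hypothetical batches read back from L1 calldata:
batches = [
    [("bridge", "alice", 100)],                 # toy deposit
    [("alice", "bob", 40), ("bob", "carol", 10)],
]
print(rebuild_rollup_state(batches))
# {'bridge': 900, 'alice': 60, 'bob': 30, 'carol': 10}
```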
To understand how rollups achieve better scalability with this design, consider the following:
Since any single node holding the current rollup state can protect all nodes without it, the centralization risk of rollup nodes is lower, and rollup blocks can therefore reasonably be made larger.
Although all L1 nodes download the rollup's transaction data, only a small number of nodes execute those transactions and maintain the rollup state, reducing overall resource consumption.
Rollup data is compressed with clever techniques before being published to L1 (see the toy compression sketch after this list).
Similar to appchains, rollups can tailor their VMs to specific use cases, which means more efficient resource usage.
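As a toy example of such compression (illustrative only; real rollups use more sophisticated encodings), addresses can be registered once and thereafter referenced by small indices:

```python
import struct

address_book = {}  # address -> small index, registered once on-chain

def register(address: str) -> int:
    return address_book.setdefault(address, len(address_book))

def compress_tx(sender: str, receiver: str, amount: int) -> bytes:
    # 20-byte addresses become 2-byte indices, and the amount is packed
    # into 4 bytes: 44+ bytes of naive payload shrink to 8.
    return struct.pack(">HHI", register(sender), register(receiver), amount)

def decompress_tx(blob: bytes):
    s, r, amount = struct.unpack(">HHI", blob)
    index_to_addr = {i: a for a, i in address_book.items()}
    return index_to_addr[s], index_to_addr[r], amount

blob = compress_tx("0xAliceAddr...", "0xBobAddr...", 1_000_000)
print(len(blob), decompress_tx(blob))  # 8 bytes, round-trips losslessly
```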
As of now, there are two major types of rollup: Optimistic rollups and ZK-rollups. From a scalability perspective, ZK-rollups have the advantage, because they compress data more efficiently and thus achieve a lower L1 footprint in some use cases. This nuance is already observable in practice: Optimism publishes data to L1 for every transaction, while dYdX publishes data reflecting every account balance. As a result, dYdX's L1 footprint is about 1/5 of Optimism's, with an estimated throughput roughly 10 times higher. This advantage naturally translates into lower fees on ZK-rollup L2 networks.
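A back-of-envelope comparison shows the intuition; the byte counts below are assumptions for illustration, not the actual encodings of Optimism or dYdX:

```python
# Illustrative numbers only (not either system's real encoding):
BYTES_PER_TX_RECORD = 80        # per-transaction calldata footprint
BYTES_PER_BALANCE_DIFF = 48     # per-account state-diff footprint

trades = 1_000                  # trades in one batch
active_accounts = 100           # distinct accounts those trades touch

per_tx_footprint = trades * BYTES_PER_TX_RECORD          # 80,000 bytes
per_diff_footprint = active_accounts * BYTES_PER_BALANCE_DIFF  # 4,800

# When many trades touch the same accounts, posting balance diffs
# instead of every transaction cuts L1 data by an order of magnitude:
print(per_tx_footprint / per_diff_footprint)  # ~16.7x less L1 data
```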
Unlike fraud proofs on Optimistic rollups, the validity proofs of ZK-rollups also enable a new scalability solution called volition. While the full impact of volitions remains to be seen, they look very promising, as they give users the freedom to decide whether their data is published on-chain or off-chain. This lets users choose their security level based on the type of transaction. Both zkSync and Starkware are set to launch volition solutions in the coming weeks/months.
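In pseudocode terms, a volition exposes a per-transaction choice roughly like the following (a hypothetical interface, not zkSync's or Starkware's actual API):

```python
from enum import Enum

class DataMode(Enum):
    ROLLUP = "on-chain"     # data posted to L1: full rollup security
    VALIDIUM = "off-chain"  # data held by a committee: cheaper, weaker DA

def submit(tx: dict, mode: DataMode) -> dict:
    # In a volition, validity proofs cover both modes; only the data's
    # home differs, so users trade fees against DA guarantees per tx.
    fee = 10 if mode is DataMode.ROLLUP else 1  # illustrative fees
    return {"tx": tx, "data": mode.value, "fee": fee}

# A trader might keep a large withdrawal on-chain but a game move off:
print(submit({"withdraw": 50_000}, DataMode.ROLLUP))
print(submit({"move": "e2e4"}, DataMode.VALIDIUM))
```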
Although rollups apply clever techniques to compress data, all of that data must still be published to all L1 nodes. As a result, rollups provide only linear scaling benefits and are limited in how far they can reduce fees; they are also highly exposed to Ethereum gas price fluctuations. To scale sustainably, Ethereum needs to expand its data capacity, which is why sharding is necessary.
Sharding and Data Availability (DA) Proofs
Sharding further relaxes the requirement that all main-chain nodes download all data, leveraging a new primitive called DA proofs to achieve higher scalability. With DA proofs, each node downloads only a small portion of the shard chain data, and these small portions together suffice to reconstruct every shard block. This enables shared security across shards, since any single shard node can raise a dispute that all nodes can then resolve on demand. Polkadot and Near have implemented DA proofs in their sharding designs, and ETH 2.0 will adopt them as well.
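The statistical trick behind DA proofs is easy to see in code. With 2x erasure coding, any 50% of the chunks suffice to reconstruct a block, so an attacker must withhold more than half of them, and each uniformly random sample then catches the withholding with probability greater than 1/2 (a simplified model; the chunk counts below are illustrative):

```python
import random

def sampling_confidence(samples: int) -> float:
    """Probability that s random samples expose a withholding attack,
    given the attacker must hide >50% of the erasure-coded chunks."""
    return 1 - 0.5 ** samples

def sample_chunks(total_chunks: int, withheld: set, k: int) -> bool:
    """A light node asks for k random chunks; one miss means reject."""
    picks = random.sample(range(total_chunks), k)
    return all(i not in withheld for i in picks)

# An attacker withholding 51% of 256 extended chunks is almost always
# caught by a handful of samples from each light node:
withheld = set(range(131))
print(sample_chunks(256, withheld, k=20))  # almost surely False (caught)
for s in (5, 10, 20):
    print(f"{s} samples -> {sampling_confidence(s):.6f} confidence")
```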
At this point, it's worth mentioning how the ETH 2.0 sharding roadmap differs from the others. Although Ethereum's initial roadmap resembled Polkadot's, it has recently moved toward data-only shards. In other words, Ethereum's shards will serve as a DA layer for rollups. This means Ethereum will remain single-state, as it is today. Polkadot, in contrast, performs all execution on the base layer, with a different state per shard.
A major advantage of shards as a pure data layer is that rollups have the flexibility to dump data onto multiple shards while remaining fully composable. The throughput and cost of a rollup are therefore not limited by the data capacity of a single shard. With 64 shards, the maximum total rollup throughput is expected to increase from 5K TPS to 100K TPS. On Polkadot, by contrast, fees are bound by the limited throughput (1,000-1,500 TPS) of a single parachain, no matter how much throughput Polkadot generates as a whole.
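A rough calculation shows where a figure like 100K TPS can come from; the per-shard capacity and per-transaction size below are assumptions for illustration (the spec has evolved over time):

```python
# Illustrative figures only; treat every constant as an assumption:
SHARDS = 64
BYTES_PER_SHARD_BLOCK = 256 * 1024  # ~256 kB of data per shard block
SLOT_SECONDS = 12                   # one block per slot
BYTES_PER_COMPRESSED_TX = 12        # a well-compressed rollup transfer

data_per_second = SHARDS * BYTES_PER_SHARD_BLOCK / SLOT_SECONDS
max_tps = data_per_second / BYTES_PER_COMPRESSED_TX

print(f"{data_per_second / 1e6:.2f} MB/s of DA -> ~{max_tps:,.0f} TPS")
# ~1.40 MB/s -> ~116,508 TPS: the right order of magnitude for "100K TPS"
```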
Dedicated DA layer
A dedicated DA layer is the latest form of modular blockchain design. It takes the basic idea of the ETH 2.0 DA layer but steers it in a different direction. The pioneering project here is Celestia, and newer solutions such as Polygon Avail are moving in the same direction.
Similar to ETH 2.0's DA shards, Celestia acts as a base layer into which other chains (rollups) can plug to inherit security. Celestia's solution differs fundamentally from Ethereum's:
It does not perform any meaningful state execution at the base layer (while ETH 2.0 does). This insulates rollups from volatile base-layer fees, which in a stateful environment can spike when token sales, NFT airdrops, or high-yield farming opportunities arise. A rollup consumes base-layer resources (i.e. bytes) only for security, and pays only for that security. This efficiency means rollup fees are driven primarily by activity on that particular rollup rather than by base-layer usage.
As with all designs, a dedicated DA layer has drawbacks. An immediate one is the lack of a default settlement layer: in order to share assets with each other, rollups must implement ways of interpreting each other's fraud proofs.
Summary
