Interpreting Vitalik’s new article: Why have Rollups, with Blob space still underused, fallen into a development dilemma?
星球君的朋友们
Odaily Senior Author
2024-04-01 03:20
Post-Cancun performance is now usable, but Vitalik is worried about how Rollups are developing.

Original author: Haotian

How should we understand Vitalik Buterin’s new article on Ethereum scaling? Some say it was outrageous for Vitalik to name-check Blob inscriptions. So how do Blob data packets actually work? Why is Blob space not being used efficiently after the Cancun upgrade? And how does DAS data availability sampling prepare the ground for sharding?

As I see it, Vitalik is worried about how Rollups will develop now that post-Cancun performance is usable. Why? Here is my understanding:

1) As explained many times before, a Blob is a temporary data packet, decoupled from EVM calldata, that the consensus layer can reference directly. The direct benefit is that the EVM does not need to access Blob data when executing transactions, so Blobs do not incur high execution-layer computation fees.

Currently, balancing a range of factors, one Blob is 128 KB and a Batch transaction to mainnet carries at most two Blobs. Ideally, the ultimate goal is for a single mainnet block to carry about 128 Blob packets, roughly 16 MB in total.
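To make those numbers concrete, here is a minimal back-of-envelope sketch in Python. It assumes only the figures above (128 KB per Blob, a long-term target of ~128 Blobs per block) plus Ethereum’s 12-second slot time:

```python
# Back-of-envelope Blob capacity, using the article's figures:
# 128 KB per Blob, a long-term target of ~128 Blobs per block,
# and Ethereum's 12-second slot time.
BLOB_SIZE_KB = 128
TARGET_BLOBS_PER_BLOCK = 128
SLOT_SECONDS = 12

block_capacity_mb = BLOB_SIZE_KB * TARGET_BLOBS_PER_BLOCK / 1024
blocks_per_day = 24 * 60 * 60 // SLOT_SECONDS
daily_capacity_gb = block_capacity_mb * blocks_per_day / 1024

print(f"Blob capacity per block: {block_capacity_mb:.0f} MB")        # 16 MB
print(f"Blocks per day: {blocks_per_day}")                           # 7200
print(f"Daily DA throughput at target: {daily_capacity_gb:.1f} GB")  # 112.5 GB
```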

Therefore, Rollup teams must balance factors such as the number of Blobs used, TPS capacity, and mainnet node storage costs, with the goal of using Blob space at the best possible cost-performance ratio.

Take Optimism as an example: it currently handles about 500,000 transactions a day, batching to mainnet on average once every 2 minutes and carrying 1 Blob packet per batch. Why just one? Because at that TPS there simply isn’t enough data to need more. It could carry two, but then neither Blob would be full and storage costs would rise, which is unnecessary.

So what happens when the Rollup’s off-chain transaction volume grows, say to 50 million transactions a day? There are three levers (see the sketch below): 1. compress each Batch so that as many transactions as possible fit into the Blob space; 2. carry more Blobs per batch; 3. batch to mainnet more frequently.
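A rough sketch of those three levers, again in Python. The 2-minute batch interval and daily transaction counts come from the article; the average compressed transaction size (~100 bytes) is my own illustrative assumption, not a measured OP Mainnet figure:

```python
import math

BLOB_SIZE_BYTES = 128 * 1024
AVG_TX_BYTES = 100   # assumed compressed size per transaction (illustrative)

def blobs_per_batch(tx_per_day: int, batch_interval_s: int = 120) -> int:
    """How many Blobs one batch needs at a given daily load and cadence."""
    batches_per_day = 24 * 60 * 60 // batch_interval_s
    bytes_per_batch = tx_per_day / batches_per_day * AVG_TX_BYTES
    return max(1, math.ceil(bytes_per_batch / BLOB_SIZE_BYTES))

# Today: ~500k tx/day fits in a single, roughly half-full Blob per batch.
print(blobs_per_batch(500_000))                          # 1
# At 50M tx/day, lever 2: carry far more Blobs per 2-minute batch...
print(blobs_per_batch(50_000_000))                       # 53
# ...or lever 3: batch every slot (12 s), so each batch needs fewer Blobs.
print(blobs_per_batch(50_000_000, batch_interval_s=12))  # 6
# Lever 1 (better compression) shrinks AVG_TX_BYTES and scales both down.
```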

2) Since the amount of data a mainnet block can carry is constrained by the gas limit and storage costs, 128 Blobs per slot is an idealized state; nowhere near that many are used today. Optimism uses only 1 Blob every 2 minutes, which leaves layer 2 teams plenty of headroom to raise TPS, grow their user base, and build out their ecosystems.

Therefore, for some time after the Cancun upgrade, Rollups will not be competing on the number or frequency of Blobs they use, and no bidding war for Blob space will emerge.

Vitalik mentioned Blob inscriptions because this kind of inscription activity can temporarily inflate transaction volume, driving up demand for Blobs and expanding their usage. Inscriptions simply make a good example for explaining how Blobs work; what Vitalik really wanted to say has little to do with inscriptions themselves.

Because in theory, if a layer 2 team pushed high-frequency, high-volume Batch transactions to mainnet and filled the Blob blocks every time, then as long as it was willing to bear the high cost of fabricating those transaction batches, it could crowd out other layer 2s’ normal use of Blobs. Under current conditions, though, this is like buying hash power to mount a 51% attack on BTC: theoretically feasible, but without any profit motive in practice.

Blobs were introduced to reduce the burden on the EVM and to lower the operating cost of running nodes, a solution tailor-made for Rollups. Blob space is clearly underused at the moment, so layer 2 gas costs will sit in a low range for a long time, giving the layer 2 market a long golden window to “build up troops and stockpile grain.”

3) So what if, one day, the layer 2 market prospers to the point where the daily volume of Batch transactions to mainnet becomes enormous and the current Blob packets are no longer enough? Ethereum already has a solution in place: data availability sampling (DAS).

A simple way to understand it: data that previously had to be stored in full on a single node can instead be distributed across multiple nodes. For example, if each node stores 1/8 of all Blob data and 8 nodes form a group that together provides full DA capability, the effective Blob storage capacity expands 8x. This is essentially what sharding will do in the future.
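Here is a toy sketch of that storage-splitting intuition, under a big simplification: real DAS erasure-codes the data so a blob can be rebuilt from a subset of shares, and nodes randomly sample small pieces to check availability; the naive chunking below needs every share back:

```python
BLOB_SIZE = 128 * 1024
NUM_NODES = 8  # the article's example: 8 nodes form one DA group

def split_blob(blob: bytes, n: int = NUM_NODES) -> list[bytes]:
    """Split a blob into n equal shares, one per node in the group."""
    share = len(blob) // n
    return [blob[i * share:(i + 1) * share] for i in range(n)]

def reassemble(shares: list[bytes]) -> bytes:
    """With naive chunking, all n shares are required to rebuild the blob."""
    return b"".join(shares)

blob = bytes(range(256)) * (BLOB_SIZE // 256)    # a dummy 128 KB blob
shares = split_blob(blob)
assert len(shares[0]) == BLOB_SIZE // NUM_NODES  # each node stores only 16 KB
assert reassemble(shares) == blob
# Each node stores 1/8 of the data, so a group of 8 nodes backs 8x the
# Blob capacity that a single full-storage node could.
```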

Vitalik has now repeated this point many times, quite pointedly, as if warning layer 2 teams: stop complaining that Ethereum’s DA capacity is expensive; at your current TPS you haven’t come close to pushing Blob packets to their limit. Put real effort into growing your ecosystems, users, and transaction volume, and stop thinking about abandoning Ethereum’s DA for one-click chain launches.

Vitalik later added that among today’s major rollups, only Arbitrum has reached Stage 1. DeGate, Fuel, and others have reached Stage 2 but are not yet widely known. Stage 2 is the ultimate goal for Rollup security, yet very few Rollups have even reached Stage 1, and most sit at Stage 0. Clearly, the state of the Rollup industry genuinely worries Vitalik.

4) In fact, when it comes to the scaling bottleneck, Rollup layer 2 solutions still have plenty of room to improve performance:

1. Use Blob space more efficiently through data compression (a rough sketch follows after this list). OP-Rollups already have a dedicated Compressor component for this; for ZK-Rollups, the compressed SNARK/STARK proofs generated off-chain and submitted to mainnet are themselves a form of compression;

2. Reduce layer 2’s dependence on mainnet as far as possible, relying on optimistic fraud-proof techniques to secure L2 only in special circumstances. For example, most of Plasma’s data lives off-chain, while deposits and withdrawals happen on mainnet, so mainnet can still guarantee their security.

This means layer 2 should treat only critical operations such as deposits and withdrawals as strongly coupled to mainnet, which both lightens mainnet’s load and improves L2’s own performance. The parallel Sequencer processing mentioned earlier in the context of parallel EVMs, which filters, classifies, and pre-processes large volumes of transactions off-chain, and the hybrid rollup promoted by Metis, where normal transactions go through the OP-Rollup path and special withdrawal requests go through a ZK route, both reflect the same line of thinking.
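Finally, the compression sketch promised under point 1 above. It is illustrative only: the dummy transactions are JSON for readability, whereas real rollups use compact binary encodings before compressing (the OP Stack’s batcher, for example, compresses channel data before posting):

```python
import json
import zlib

# 1,000 dummy transactions with repetitive fields, standing in for a batch.
txs = [
    {"from": f"0x{i:040x}", "to": "0x" + "ab" * 20, "value": 10**18, "nonce": i}
    for i in range(1_000)
]
raw = json.dumps(txs).encode()
compressed = zlib.compress(raw, 9)

print(f"raw batch:       {len(raw):>7} bytes")
print(f"zlib-compressed: {len(compressed):>7} bytes "
      f"({len(compressed) / len(raw):.0%} of original)")
# Repetitive fields (shared addresses, similar values) compress extremely
# well, which is why per-batch compression stretches each 128 KB Blob further.
```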

That’s all.

In short, Vitalik’s article on Ethereum’s future scaling plans is genuinely thought-provoking: his dissatisfaction with the current state of layer 2 development, his optimism about Blobs’ untapped performance, his anticipation of future sharding technology, and even the optimization directions he pointed out for layer 2.

In fact, the only uncertainty now rests with layer 2 itself: how will it accelerate its own development?

