OP+ZK, will Hybrid Rollup become the ultimate future of Ethereum scalability?

BlockBeats
Guest Columnist
2023-07-01 02:30
This article is about 3,094 words; reading time is about 5 minutes.
The future of Ethereum Rollup actually involves a combination of two main methods: ZK and Optimistic.

Original author: @kelvinfichter

Original translation: Jaleel, BlockBeats

I recently became convinced that the future of Ethereum Rollups is actually a combination of these two main approaches: ZK and Optimistic. In this article, I will try to explain the key points of this architecture as I imagine it, and why I believe it is the direction we should be moving in. Please note that I have spent most of my time working on Optimism, an Optimistic Rollup, so I am not an expert in ZK. If I make any mistakes when talking about ZK, please feel free to contact me and I will correct them.

I do not intend to go into detail about how ZK and Optimistic Rollups work in this article. If I were to spend time explaining the essence of Rollups, this article would be too long. So this article is based on the assumption that you already have a certain understanding of these technologies. You don't have to be an expert, but you should at least know what ZK and Optimistic Rollups are and how they roughly work. In any case, enjoy reading this article.

Let's start with Optimistic Rollup

A system that combines ZK and Optimistic approaches starts with Optimism's Bedrock architecture for Optimistic Rollups. Bedrock is designed to be maximally compatible with Ethereum ("EVM Equivalence"), which it achieves by running an execution client that is almost identical to a standard Ethereum client. Bedrock takes advantage of Ethereum's upcoming consensus/execution client separation model, significantly reducing the differences from the EVM (there will always be some changes along the way, but we can handle them).

Like any good Rollup, Optimism derives block/transaction data from Ethereum, orders it deterministically within a consensus client, and feeds this data to the L2 execution client. This architecture solves the first half of the "ideal Rollup" puzzle and gives us an EVM-equivalent L2.
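
To make that data flow concrete, here is a minimal sketch in Go of the derivation loop described above. It is only an illustration under assumed types; the interfaces and names (L1Source, ExecutionEngine, deriveLoop) are hypothetical and do not come from Optimism's actual op-node code.

```go
// A minimal sketch (not Optimism's actual op-node API) of the derivation loop:
// pull batch data posted to L1, order it deterministically, and hand the
// resulting payloads to the L2 execution client.
package derivation

type L1Block struct {
	Number    uint64
	BatchData [][]byte // rollup transaction batches posted to L1
}

type L2Payload struct {
	ParentNumber uint64
	Transactions [][]byte
}

// L1Source and ExecutionEngine are hypothetical interfaces standing in for
// the consensus client's L1 reader and the (nearly unmodified) execution client.
type L1Source interface {
	NextBlock() (L1Block, bool)
}

type ExecutionEngine interface {
	Apply(payload L2Payload) error
}

// deriveLoop is the core of the architecture: deterministic ordering means
// every honest node that reads the same L1 data computes the same L2 chain.
func deriveLoop(l1 L1Source, engine ExecutionEngine) error {
	for {
		block, ok := l1.NextBlock()
		if !ok {
			return nil // caught up with L1
		}
		for _, batch := range block.BatchData {
			payload := L2Payload{
				ParentNumber: block.Number,
				Transactions: [][]byte{batch},
			}
			if err := engine.Apply(payload); err != nil {
				return err
			}
		}
	}
}
```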

Of course, the problem we now need to solve is how to inform Ethereum about what happens inside Optimism in a verifiable way. If this problem is not solved, smart contracts cannot make decisions based on the state of Optimism. That would mean users could deposit into Optimism but never withdraw their assets. While one-way Rollups can be useful in some cases, two-way Rollups are far more useful in most.

All Rollups inform Ethereum about their state in the same way: by publishing a commitment to that state and proving that the commitment is correct. In other words, we are proving that the "Rollup program" was executed correctly. The only substantive difference between ZK and Optimistic Rollups is the form of this proof. In a ZK Rollup, you must provide an explicit zero-knowledge proof that the program was executed correctly. In an Optimistic Rollup, you publish the commitment without any explicit proof, and other users can challenge your claim, forcing you into a back-and-forth challenge "game" that determines who is right.
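
As a rough illustration of the contrast, the following Go sketch shows the two acceptance rules side by side. The types and functions are hypothetical, not any real contract or client API.

```go
// A compact sketch (hypothetical types) contrasting how the two proof styles
// accept a state commitment on L1.
package proofs

import "time"

type Commitment struct {
	L2BlockNumber uint64
	StateRoot     [32]byte
	SubmittedAt   time.Time
	Challenged    bool
}

// ZK-style: the commitment is final as soon as a validity proof verifies.
func acceptWithValidityProof(c Commitment, proof []byte, verify func([32]byte, []byte) bool) bool {
	return verify(c.StateRoot, proof)
}

// Optimistic-style: the commitment is accepted by default and becomes final
// only after a challenge window passes without a successful fault proof.
func acceptOptimistically(c Commitment, challengeWindow time.Duration, now time.Time) bool {
	return !c.Challenged && now.Sub(c.SubmittedAt) >= challengeWindow
}
```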

I don't intend to go into detail about Optimistic Rollup challenge games here. It's worth noting that the state of the art is to compile your program (in Optimism's case, the Geth EVM plus a few surrounding components) down to a simple machine architecture such as MIPS. We do this because we need to build an on-chain interpreter for the program, and building a MIPS interpreter is much easier than building an EVM interpreter. The EVM is also a moving target (we have regular upgrade forks), and it doesn't exactly capture the program we want to prove anyway (there are some non-EVM parts in there too).
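
To see why a MIPS interpreter is so much easier to put on-chain than an EVM interpreter, here is a toy single-step interpreter in Go. It implements just two opcodes and is purely illustrative; it is not Optimism's actual fault-proof (Cannon) code.

```go
// A toy sketch of why a MIPS interpreter is easy to write on-chain:
// one pure function steps the machine state by a single instruction.
// Only two R-type opcodes are handled here, purely for illustration.
package mips

type State struct {
	PC   uint32
	Regs [32]uint32
	Mem  map[uint32]uint32
}

// Step decodes and executes one 32-bit MIPS instruction.
func Step(s State) State {
	ins := s.Mem[s.PC]
	opcode := ins >> 26
	rs := (ins >> 21) & 0x1f
	rt := (ins >> 16) & 0x1f
	rd := (ins >> 11) & 0x1f
	funct := ins & 0x3f

	switch {
	case opcode == 0 && funct == 0x21: // ADDU rd, rs, rt
		s.Regs[rd] = s.Regs[rs] + s.Regs[rt]
	case opcode == 0 && funct == 0x24: // AND rd, rs, rt
		s.Regs[rd] = s.Regs[rs] & s.Regs[rt]
	}
	s.PC += 4
	return s
}
```

The instruction set this function has to cover is small and has been stable for decades, which is exactly what makes the on-chain version tractable.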

Once you have built an on-chain interpreter for your simple machine architecture and created some offline tools, you should have a fully functional Optimistic Rollup.

Moving towards ZK Rollup

Overall, I am firmly of the opinion that Optimistic Rollups will dominate in the coming years. Some believe that ZK Rollups will eventually overtake Optimistic Rollups, but I don't quite agree. I think the relative simplicity and flexibility of Optimistic Rollups today means they can be gradually transformed into ZK Rollups over time. If we can find a pattern for this transition, then there is no need to go to the trouble of building a less flexible and more fragile ZK ecosystem when you can simply deploy into an existing Optimistic Rollup ecosystem instead.

Therefore, my goal is to create an architecture and migration path that allows existing modern OP ecosystems (such as Bedrock) to seamlessly transform into ZK ecosystems. I believe this is not only feasible but also a way to go beyond the current zkEVM approach.

We start with the Bedrock architecture I described earlier. Recall that Bedrock has a challenge game that can verify the validity of some claimed execution of the L2 program (the EVM plus some extra bits, running as a MIPS program). One major drawback of this approach is that we need to give users time to detect and successfully challenge an incorrect proposed result. This adds a significant amount of time to the withdrawal process (7 days on the current Optimism mainnet).

However, our L2 is just a program running on a simple machine (such as MIPS). It is entirely possible to construct a ZK circuit for this simple machine. We can then use this circuit to explicitly prove the correct execution of the L2 program. Without making any modifications to the current Bedrock codebase, you can start publishing validity proofs for Optimism. It really is that simple.
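
In other words, the thing being proven is nothing more than repeated application of a single step function. The sketch below (hypothetical types; not a real zkMIPS library) shows the shape of that workflow: replay the program off-chain, then ask a prover to attest to the start-to-end transition.

```go
// A hedged sketch of the idea above: the L2 program is just repeated
// application of a step function, so proving it amounts to proving each
// step transition in a circuit. Prover details are hypothetical placeholders.
package zkvm

type MachineState struct {
	PC   uint32
	Regs [32]uint32
	Root [32]byte // commitment to memory
}

// Step is the same deterministic transition used by the fault-proof interpreter.
type Step func(MachineState) MachineState

// Prover turns a claimed (start, end) transition over n steps into a proof.
type Prover interface {
	Prove(start, end MachineState, steps uint64) ([]byte, error)
}

// ProveExecution replays the program off-chain and asks the prover to attest
// that `steps` applications of Step take `start` to the returned final state.
func ProveExecution(step Step, prover Prover, start MachineState, steps uint64) (MachineState, []byte, error) {
	state := start
	for i := uint64(0); i < steps; i++ {
		state = step(state)
	}
	proof, err := prover.Prove(start, state, steps)
	return state, proof, err
}
```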

Why is this method reliable?

Let me clarify: although I say "zkMIPS" throughout this section, I am really using it as shorthand for any generic, simple zero-knowledge virtual machine (zkVM).

zkMIPS is easier than zkEVM

Building a zkMIPS (or any other kind of zk virtual machine) has one significant advantage over a zkEVM: the target machine architecture is simple and static. The EVM changes frequently: gas prices are adjusted, opcodes are modified, things are added or removed. MIPS-V, by contrast, has not changed since 1996. By targeting zkMIPS, you are working on a fixed problem space. You do not need to modify or re-audit your circuit every time the EVM is updated.

zkMIPS is more flexible than zkEVM

Another key point is that zkMIPS is more flexible than zkEVM. With zkMIPS, you can freely change client code to perform various optimizations or improve user experience without requiring corresponding circuit updates. You can even create a core component to turn any blockchain into a ZK Rollup, not just Ethereum.

Your problem becomes proving time

Zero-knowledge proving time scales along two axes: the number of constraints and the size of the circuit. By focusing on a simple machine like MIPS (rather than a more complex machine like the EVM), we can significantly reduce the size and complexity of the circuit. However, the number of constraints depends on the number of machine instructions executed. Each EVM opcode decomposes into multiple MIPS opcodes, which significantly increases the number of constraints and, with it, your overall proving time.
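
A back-of-envelope calculation makes the trade-off visible. All of the numbers below are made up for illustration; they are not measurements of any real zkEVM or zkMIPS system.

```go
// Back-of-envelope illustration (all numbers hypothetical) of the trade-off
// described above: a simpler machine shrinks the circuit, but each EVM opcode
// expands into many MIPS instructions, so the constraint count grows.
package main

import "fmt"

func main() {
	const (
		evmOpcodesPerBlock   = 2_000_000 // hypothetical L2 block workload
		mipsPerEVMOpcode     = 50        // hypothetical expansion factor
		constraintsPerEVMOp  = 5_000     // hypothetical zkEVM cost per opcode
		constraintsPerMIPSOp = 200       // hypothetical zkMIPS cost per instruction
	)

	zkEVMConstraints := evmOpcodesPerBlock * constraintsPerEVMOp
	zkMIPSConstraints := evmOpcodesPerBlock * mipsPerEVMOpcode * constraintsPerMIPSOp

	fmt.Printf("zkEVM constraints:  %d\n", zkEVMConstraints)
	fmt.Printf("zkMIPS constraints: %d\n", zkMIPSConstraints)
	// With these made-up numbers, zkMIPS pays a 2x constraint penalty in
	// exchange for a much smaller, static circuit that can be optimized hard.
}
```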

However, reducing proving time is also a problem deeply rooted in the Web2 space. Since the MIPS machine architecture is not going to change any time soon, we can heavily optimize our circuits and provers without worrying about future EVM changes. And I am confident that the pool of senior hardware engineers who could be hired to optimize a well-defined problem is ten or even a hundred times larger than the pool of engineers able to build and audit a constantly changing zkEVM target. Companies like Netflix probably have plenty of hardware engineers working on optimizing transcoding chips who would happily take on an interesting ZK challenge backed by a pile of venture capital.

The initial proving time for a circuit like this may exceed the current 7-day Optimistic Rollup withdrawal period. Over time, this proving time will only decrease. By introducing ASICs and FPGAs, we can dramatically accelerate proving. With a static target, we can build an increasingly optimized prover.

Eventually, the proving time for this circuit will drop below Optimism's current 7-day withdrawal period, and we can start thinking about removing the Optimistic challenge process. Running a prover for 7 days is probably still too expensive, so we may want to wait a bit longer, but the point stands. You can even run both proof systems at the same time, so that we can start using ZK proofs as soon as possible and fall back to Optimistic proofs if the prover fails for any reason. When ready, the Optimistic proofs can be removed in a way that is completely transparent to applications, and your Optimistic Rollup becomes a ZK Rollup.
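
The hybrid arrangement described above boils down to a simple finalization rule, sketched below with hypothetical types: a commitment becomes final as soon as a ZK proof verifies, or, failing that, once the Optimistic challenge window passes unchallenged.

```go
// A hedged sketch of the hybrid finalization rule: accept a commitment as soon
// as a ZK validity proof verifies, otherwise fall back to the Optimistic
// challenge window. Types and functions are illustrative only.
package hybrid

import "time"

type StateCommitment struct {
	StateRoot   [32]byte
	SubmittedAt time.Time
	ZKProof     []byte // empty if no proof has been produced yet
	FaultProven bool   // set if a fault proof succeeded against this commitment
}

const challengeWindow = 7 * 24 * time.Hour

// Finalized returns true once the commitment can be trusted by L1, using
// whichever proof system gets there first.
func Finalized(c StateCommitment, verifyZK func([32]byte, []byte) bool, now time.Time) bool {
	if c.FaultProven {
		return false
	}
	// Fast path: a valid ZK proof finalizes immediately.
	if len(c.ZKProof) > 0 && verifyZK(c.StateRoot, c.ZKProof) {
		return true
	}
	// Fallback: the Optimistic challenge window elapses unchallenged.
	return now.Sub(c.SubmittedAt) >= challengeWindow
}
```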

You can focus on other important issues.

Running a blockchain is a complex problem that involves much more than just writing a lot of backend code. At Optimism, much of our work is focused on improving the experience of users and developers with useful client-side tooling. We also invest a lot of time and energy in the "soft" side: talking to projects, understanding their pain points, designing incentive mechanisms. The more time you spend on chain software, the less time you have for these other things. You can always try to hire more people, but organizations do not scale linearly, and each new hire adds internal communication overhead.

Because the zero-knowledge circuit work can be applied directly to an existing running chain, you can build out the core platform and develop the proving software at the same time. And since the client can be modified without changing the circuit, you can decouple your client team from your proving team. An Optimistic Rollup built this way could be years ahead of its zero-knowledge competitors in terms of actual on-chain activity.

Conclusion

To be frank, I see no obvious flaw in the zkMIPS prover approach, unless the prover simply cannot be optimized significantly over time. The only real impact on applications, I believe, would be the need to adjust the gas costs of certain opcodes to reflect their increased proving times. If it truly proves impossible to optimize this prover to a reasonable level, then I will admit I was wrong. But if the prover can be optimized, the zkMIPS/zkVM approach may well replace the current zkEVM approach entirely. That may sound like a radical statement, but not long ago single-step Optimistic fault proofs were completely replaced by multi-step proofs.

Original Article Link
