
Dialogue with Scroll Co-Founder Ye Zhang: Scroll and ZK Go Together

星球君的朋友们
Odaily Senior Author
2023-10-19 02:49
This article is about 20,161 words; reading the full article takes about 29 minutes.
Let’s talk about the design and trade-offs of zkEVM, the choice of proof system, the hardware-accelerated prover network, and the future of ZK.

Original source: Scroll CN

Scroll Talk is a podcast hosted by Scroll CN. Through different formats, we talk with the Scroll team and projects in the Scroll ecosystem to help everyone understand Scroll better.

In this episode, we invited Ye Zhang, co-founder of Scroll, to talk with him about Scroll and ZK, including the design and trade-offs of zkEVM, the choice of proof system, the hardware-accelerated prover network, and ZK’s future.

Opening

FF: Hello everyone, welcome to Scroll Talk. Today I am very happy to have Ye Zhang, co-founder of Scroll, with us. Scroll CN has published many of Ye's interviews and talks, but this should be our first face-to-face conversation, so first of all, thank you very much for coming, Ye. Ye is now very influential in the zero-knowledge proof community, but we would still like him to briefly introduce himself first.

Ye: Hello, and thank you, Scroll CN, for arranging this interview. I have always been grateful for Scroll CN's contributions to the Chinese community, including its high-quality translations, which have given us great influence there. Let me briefly introduce myself. Hello everyone, my name is Ye Zhang, and I am one of the co-founders of Scroll. My main focus is research related to zero-knowledge proofs, previously in three directions.

The first direction is hardware acceleration of zero-knowledge proofs. We started working on this about five years ago, because one of the biggest bottlenecks back then was that generating proofs was very slow. For example, an application like Zcash could take 10 minutes or more to generate the proof for a single transaction. That meant zero-knowledge proofs could not be used in many systems because proving was too inefficient. So my first research direction was how to accelerate proof generation with GPU, FPGA, and ASIC hardware.

The second direction is the cryptography and mathematics behind zero-knowledge proofs. A zero-knowledge proof is a very complex cryptographic protocol that involves a lot of mathematics, such as polynomials. My main research work there was reading many papers to see how existing algorithms could be optimized, which is more theoretical.

The third direction is more application-oriented: how to design the architecture and circuits for a zkEVM, and how to generate proofs for it.

In general, it covers roughly three directions: hardware acceleration of zero-knowledge proofs, theoretical algorithms for zero-knowledge proofs, and applications of zero-knowledge proofs.

At Scroll, I mainly focus on research-oriented work, including research related to zero-knowledge proofs, protocol design, and overall strategy for the company.

FF: Thank you, Ye. We know you have been doing ZK research for a long time. What led you to found Scroll, and what has motivated you to stay so deeply involved in the ZK field?

Ye: This is a rather different story. Most people today heard of zero-knowledge proofs or learned ZK because they realized blockchain needs it, but my path was the opposite: I was attracted to ZK first, and only later discovered that ZK could be used in blockchain. As an undergraduate, I was doing research on hardware acceleration algorithms with a senior labmate. The most popular topic at the time was AI acceleration, but I was not very interested in AI; the process of tuning parameters had no mathematical model I could understand for why a given parameter produced a given result after training. I prefer more deterministic mathematics, where I know the probability of something happening. So I naturally gravitated toward cryptographic number theory, discovered zero-knowledge proof algorithms, and found they had a very large need for hardware acceleration, so I started doing related research. Later, while researching acceleration algorithms, I found the charm of the algorithms themselves greater than that of the hardware acceleration, because they involve many very clever polynomial constructions and protocol structures. If you look deeply at any zero-knowledge proof protocol, you will find it is really ingenious: the program is encoded through polynomials, a few polynomial evaluations are used to verify properties of those polynomials, and everything is finally compressed into a very, very small proof. The entire mathematical structure is very clever. So when I first entered the zero-knowledge proof field, I was completely attracted by the charm of its mathematical structure. Later, I discovered that the thing I was studying could solve the biggest problem blockchain faces, which is scaling.
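To make the idea concrete, here is a minimal Python sketch of the trick Ye is describing, purely illustrative and not Scroll's actual protocol: encode a claim as a polynomial identity, then check it at a single random point, which the Schwartz-Zippel lemma makes convincing for low-degree polynomials over a large field.

```python
import random

P = 2**61 - 1  # a large prime field (illustrative choice)

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by its coefficients at x (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Claim to verify: a(x) * b(x) == c(x) as polynomials,
# standing in for "the program trace satisfies its constraints".
a = [3, 0, 2]      # 2x^2 + 3
b = [1, 4]         # 4x + 1
c = [3, 12, 2, 8]  # their product: 8x^3 + 2x^2 + 12x + 3

r = random.randrange(P)  # the verifier's random challenge
# Distinct low-degree polynomials almost never agree at a random point,
# so one evaluation check stands in for checking the whole identity.
assert poly_eval(a, r) * poly_eval(b, r) % P == poly_eval(c, r)
```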

Later I realized that the entire Ethereum ecosystem is very prosperous and has a very good open source community, which fits my personal beliefs. Its research atmosphere, its attitude of embracing open source, and its rigorous, academic style completely won me over. At the same time, I realized that blockchain is not a castle in the air but a real architecture that can solve problems in many people's lives. It may be the next generation of financial infrastructure, and many people really need transparency and censorship resistance. So I think blockchain has real application scenarios, and at the same time my technology can solve its problems.

The beginning of 2021 was the best moment, because the efficiency of zero-knowledge proofs had improved by two to three orders of magnitude. When a technology improves by that much, there are huge opportunities, whether entrepreneurial or otherwise, because problems that could not be solved before now can be. At the time, I felt zkEVM was the biggest opportunity, and not many people were working on it. We had a very good window, plus the accumulated technology to solve the problem, so we started to work on Scroll.

In fact, I was also working on zero-knowledge proofs while studying for my Ph.D. But I realized a problem: in industry, for example at Scroll, you have a lot of flexibility to do ZK-related research, whereas in school you have to work with an advisor and may only be able to research one direction of ZK.

At Scroll you have more flexibility, because the problems you solve are real industry problems, so solving them has greater impact. And you are not limited to one direction of doctoral research; you can collaborate with more people through grants and other channels. So at Scroll I am doing essentially the same thing, but with more industry impact, solving the most real problems, with a wider scope of collaboration. That path is more attractive to me than finishing a Ph.D.

ZK technology development and future

FF: I see, thank you, Ye. So it is the fascinating mathematics behind ZK that keeps you doing related research. I heard the main breakthrough came two or three years ago. Did ZK have a huge breakthrough similar to the emergence of ChatGPT this year?

Ye: Yes, I think so, but it was not like ChatGPT, a single breaking point where everything suddenly exploded. It was a process of several factors stacking up. For example, the hardware acceleration direction I had been studying can increase the efficiency of zero-knowledge proofs by 10 to 100 times. Then there are new polynomial representations of circuits: higher-degree custom gates and lookup tables can express computation more efficiently and cut costs by about 10 times. And then recursive proofs can aggregate many proofs together, saving a lot of verification cost. The combination of these three points produced the huge efficiency improvement.
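As a rough illustration of what a lookup table buys (schematic Python, not Scroll's actual circuits): checking membership gate by gate costs on the order of the table size per value, which is exactly the cost that lookup arguments such as plookup amortize away.

```python
from functools import reduce

P = 2**61 - 1             # a prime field
table = list(range(256))  # e.g. a byte-range table

def in_table(w):
    # Naive membership check: w is in the table iff the table's
    # vanishing polynomial Z(x) = prod_{t in table} (x - t) is zero at w.
    # Encoded as gates this costs O(|table|) constraints per value;
    # a lookup argument reduces it to roughly one lookup row per value.
    return reduce(lambda acc, t: acc * (w - t) % P, table, 1) == 0

assert in_table(200) and not in_table(300)
```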

Of course, the end result is an improvement in efficiency, but it was not a ChatGPT-style overnight sensation; it is the result of the efforts of cryptographers and many hardware engineers.

FF: Since we have touched on AI, what do you think of the current combination of ZK and AI, given that Worldcoin has launched and uses ZKML technology? At the intersection of ZK and AI, how do you see ZKML playing out across different fields?

Ye: There are indeed many people working on ZKML now, but I think the direction is still quite early. It has some application scenarios: verifying that a photo was really taken by your camera and has not been heavily edited; proving whether a piece of audio belongs to a certain person; or proving whether Microsoft is serving the same model to everyone, because when you give a model an input and it returns an output, you cannot be sure it does not swap in a different model for different people. There will be small application scenarios like these, but I have not seen a particularly big need that would make ZKML as widespread as AI itself.

Because, for example with ChatGPT, most companies that own models have absolute market power. They do not have to prove to you which model they used, and you cannot force them to. Unless the market had ten companies like ChatGPT, and ChatGPT declined to provide such proofs while one of the others was willing, in which case users with that need would choose the willing company's service. But the current market is one where only a few companies can build models like ChatGPT, and they have no strong motivation or incentive to do this for you. So I think this road is still quite long. In addition, many problems around photos or audio remain unsolved; you may also need hardware support to build such a system.

Overall, there is still a long way to go. ZKML may enable some new approaches in liquidity management and some small use cases, but larger application scenarios still need time to find a product that matches the market. Moreover, ZKML cannot actually prove that the training process was correct; it can only prove that an inference was computed correctly, which further limits what it can do. I think there is still a certain distance to cover.

Most ZKML companies are still building tools. I know some are working on directly converting code written in TensorFlow or PyTorch into ZK circuits and generating proofs, which could be a very interesting direction: start with DSLs and SDKs, then encourage new innovation. These are still very early days. It may eventually develop into ZK for general computation, but with algorithm libraries better suited to ML, such as matrix multiplication or convolution, which would serve such applications better than standalone ZKML. There is still a long way to go.

One of the more cutting-edge people in this area is Daniel Kang, a professor at UIUC. We previously invited him to speak at Scroll's ZK Symposium, so if you are interested in this direction, you can check out that series:

https://www.youtube.com/watch?v=Co5gNoHnMhs&list=PLrzRr7okCcmbAlgYpuFjzUJv8tAyowDQY&index=14

FF: Okay, thank you. So ZKML is still relatively early, currently confined to some fairly small building blocks, and general computation is further off; the market may mature later, once everyone has privacy needs that ZKML can serve. From the larger perspective of ZK: Vitalik has said, roughly, that ZK and blockchain are equally important concepts. What do you think of that view?

Ye: I think that is indeed the case, because ZK solves many problems that blockchain cannot solve on its own; it is a very perfect combination. Blockchain cannot solve scaling; ZK can compress computation and solve scaling. Blockchain is always open and transparent and cannot provide privacy; ZK can hide information and solve privacy. So ZK and blockchain are a natural, very good pairing.

In addition, ZK support for general computation is advancing very quickly, so I think there is a very big opportunity. On the privacy side, for example, private transactions and privacy pools, including on-chain poker: if you do not want others to see your cards once they are dealt on-chain, you can hide that information with zero-knowledge proofs. Games that hide information like this can only be implemented on a blockchain through ZK. Also on the privacy side, ZK identity is a very interesting and promising direction. How do we get one billion users onto the blockchain? We may need to ZK-ify some existing identity systems before people are willing to put some of their information on-chain.

On the scalability side, various rollups compress computation, various co-processors compress specific computations, and finally the proof is posted on-chain. This is a very good combination of on-chain and off-chain.

There are also some very interesting smaller directions with promise: some teams are building ZK cross-chain bridges or ZK hardware services. But I think it will still take a few years to mature. Whether in developer SDK convenience, ZK efficiency, or security, there is still a long way to go.

FF: I see, thank you, Ye. From your description, ZK and blockchain are complementary. Beyond the application scenarios you just mentioned, from the perspective of efficiency and fairness, what changes do you think this technological innovation will bring to the real world?

Ye: I think being able to make any computation trustless is a very strong property. You can throw any computation onto a trustless platform, have it return a result, and generate a proof that the result is correct. This ensures the correctness and verifiability of your computation. Then, as I said, there are various applications such as identity, privacy, and scalability.

FF: I see, thank you. So ZK can empower general computation, whether by providing privacy or verifiability, and it is a very promising direction. If Scroll had not worked out and you had to start a second venture in the ZK field, which track and direction would you choose?

Ye: This is a very difficult question. First of all, zkEVM is definitely the largest direction, because it carries Ethereum's traffic entrance and must be the biggest one. If I had to choose another direction, I am personally very optimistic about co-processors, which can make non-EVM computation very efficient and verifiable. The other is identity protocols. Building a good identity system is very difficult, and it can solve many real-life problems. Especially when I went to Africa, I saw all kinds of problems caused by immature financial infrastructure, so I think identity will be a big direction.

If I personally had to choose: for a smaller team, I think the identity opportunity is great, while a very strong engineering team should do more complex things, and ZK co-processors would be the better direction, although there are already too many people on that track. Identity is a track that has not yet reached the general public, and it requires not only technology but also business strategy: you have to think about which business partners to work with and whether you can directly ZK-ify their large amounts of data so you can grow your user base faster. The technology may be the smaller problem.

If you are a very innovative person, you can also try the ZK gaming direction. A game requires good design and ZK-ifying the information that needs to be hidden. But ZK is not a panacea and cannot solve all privacy problems; the prover has to know certain information. So such a game needs to be designed very cleverly to push ZK to its limits. If you have great ideas and like games, thinking your game logic through and building a very interesting ZK game is also a fascinating direction.

FF: Thank you. You just mentioned three directions: first, co-processors, similar to what Axiom is doing; second, identity, which we can understand through what Worldcoin is doing, as one particular example; and third, games, a direction ordinary users will encounter daily. Ye, you mentioned you just came back from Africa. What did you gain from this trip promoting ZK technology and Ethereum?

Ye: It was a very unique experience. Let me briefly give the background. In February this year, Vitalik, Aya from the Ethereum Foundation, and others visited four countries in Africa, spending close to a month holding activities with African communities and meeting African founders to understand the situation on the continent. Because the Ethereum community in Africa is relatively small, they went to understand its current state: what does the community need now to spread the value of Ethereum? Their conclusion at the time was that Ethereum was still a bit expensive.

They hoped to arrange a Layer 2 trip to bring Ethereum's Layer 2 to Africa: since people there cannot afford Ethereum itself, they can only enter the Ethereum world through Layer 2. So around April or May this year, through Vitalik's introduction, I got to know Joseph, the organizer of their trip, to see if there was an opportunity to organize a Layer 2 trip. After talking, we felt our values were very aligned. Part of Scroll's values is that we want to bring real users and use cases to the blockchain, so we were very excited to learn about some of Africa's real needs.

After going there, I found it really is different, which makes me more confident about real use cases in developing and emerging countries. Before going to Africa, many people, including people I am in contact with now, were still questioning whether blockchain meets a real need or is just a scam, just a tool for issuing tokens. I think people make such remarks because, whether in China, elsewhere in Asia, or in the West, in the United States and Europe, everyone's understanding of blockchain is whales and liquidity mining. It is not that they really need blockchain in their lives; they just see tools on it that let them make more profit, and sometimes it may feel safer to keep their assets there. It is not a particularly necessary tool for them.

The two countries we went to were Kenya and Nigeria, and we clearly felt that people there really need blockchain as a platform in their daily lives. A very obvious example: when transferring money between two neighboring African countries, there is often no way to transfer directly through a bank; the money has to take a long detour. Their financial infrastructure is genuinely poor, and they are completely unable to build a globally connected system.

So what they need first is a payment tool, and blockchain is very useful simply as a payment tool; it can really change their lives, because traveling to neighboring countries requires a payment medium. Many people say that all blockchain can do is global payments, which sounds like a very narrow purpose. But a global payment system can solve the needs of many people, especially in countries where financial infrastructure is incomplete. If you live in China, the United States, or Europe, where the infrastructure is very complete, you usually never worry about such problems.

The second thing is that their inflation is very high. Their currency has lost roughly 10% of its value between our visit and now. Imagine the RMB or U.S. dollars in your hand depreciating 10% in one month while your savings earn perhaps 3 to 4% a year and prices keep rising. This greatly affects their lives, and stablecoins are a way for them to obtain U.S. dollars. They want dollars because dollar inflation is relatively low, but they cannot obtain them directly since they cannot open accounts at U.S. banks. So they buy USD stablecoins and hold some assets on-chain; obtaining USDT is a very important hedge against hyperinflation. In China, people may be fine just holding RMB, needing USDT only to buy cryptocurrencies. But there, stablecoins are needed in real life: people frequently do OTC trades and convert to their own currency only when they actually spend. So I think this is a big application scenario, and in these countries and many other places the need is real.

The third thing is that because their financial infrastructure is imperfect, credit scoring and identity for borrowing are very weak. It may take a month to borrow, say, $100, with all sorts of approvals, because information does not flow between financial institutions. So lending, which is a very big business for banks and financial institutions elsewhere, is very underdeveloped there. I think this is a huge opportunity as well.

There are many real application scenarios in Africa that need blockchain. For example, a good identity system that solves these problems and provides loans or other services on-chain would be a very valuable thing. This was the first time I felt that our technology is really changing the lives of people in many corners of the world, and that is very important.

Part of Scroll's values is that we want to bring the next billion people into Ethereum. People often complain that BSC is very centralized while Ethereum is decentralized but expensive. Yet there are many real users on BSC, because Binance is behind it, and in Africa I saw for the first time that many people really are using Binance to make payments, because it is simple and easy to use. We hope to bring these real users back to Ethereum. This is part of our mission: to bring the next billion users back from less trust-minimized systems to Ethereum by reducing fees through Layer 2. Because if you keep your money on a centralized exchange, problems may arise. So we hope to put it on a Layer 2 that inherits Ethereum's security. This is a good opportunity.

Imagine a future where crypto plays a vital role in daily life and blockchain gains real-world adoption, especially in emerging economies.

  • Children in Turkey can buy ice cream on a hot summer day with a stablecoin on Scroll, which allows them to exchange cryptocurrencies for Turkish lira with just one click.

  • An elderly man in Argentina can obtain government benefits and subsidies on Scroll, reducing fraud and ensuring fair distribution of funds.

  • Filipino merchants can send money across borders in seconds through Scroll without having to go through many middlemen.

  • Kenyan farmers can obtain loans through the transparent credit scoring system on Scroll, which solves the problem of trust and improves the utilization of working capital.

These things will all happen at the same time, including institutions going on-chain, governments issuing stablecoins, and the relaxation of legal compliance in different regions.

We believe the next billion users will come from places where there is a real need for cryptocurrency. Scroll aims to bring these users into the crypto ecosystem and solve real-world problems such as financial inclusion, social coordination, and personal sovereignty.

The second point about Scroll's values is that we will not do a lot of marketing announcements and self-promotion in different places. We hope to really bring educational and research resources to Africa and similar regions: how can we help people there learn this field faster, and not just Scroll, but blockchain education as a whole? Some projects have run campaigns in Africa, but they basically just throw money around and hand out lots of grants. A community developed that way is very short-term and not value-driven. We hope that by doing the right things, bringing educational resources to Africa, understanding real local needs, and tailoring the help we provide rather than just throwing money, we show that we really care about the people and communities in these places. I think the same applies to many current applications: many deploy from one chain to another and then another, always reaching the same wave of airdrop hunters or the same wave of Western users. If we can really diversify the entire user base, it will be a huge bonus for the application ecosystem on our chain. Attracting people from different places to try and experience your application is also a big direction we are thinking about.

zkEVM

FF: Thank you very much, Ye, for sharing so many insights about Africa. It sounds like developing countries are indeed a very big opportunity: they lack the existing infrastructure our generation takes for granted, which instead makes them a blank sheet of paper where a lot of new infrastructure can be applied directly to daily life. Scroll can also use this market to bring blockchain to the next billion users.

Now we want to talk specifically about the zkEVM Scroll is developing. The classification of zkEVMs is a common topic, and we all know it is a question of performance-compatibility trade-offs. Scroll has been working with the PSE team to build a Type 1 zkEVM, the most compatible point in that trade-off. Our question is: as ZK technology develops, could this trade-off be broken, or will everyone choose greater compatibility as performance improves?

Ye: First, let's talk about our technology stack. We have been building a Type 1 zkEVM with the Ethereum Foundation's ZK team, also known as the PSE team, since the beginning of 2021. About half of the contributions to the codebase are from our side, half from PSE, plus occasional contributors from the community. We have always strongly supported this open-source, community-driven development and have insisted on contributing code back to Ethereum. The purpose of the project is to build a Type 1 zkEVM that can one day actually be used at Ethereum Layer 1, change Ethereum's roadmap, and build the future for Ethereum, not just for ourselves. This is a community version of a Type 1 zkEVM, built jointly by us, PSE, and other community contributors. So it is not all our credit, but everyone's.

For Scroll itself, we need a mainnet: a version with complete product functionality and thorough auditing. According to our current evaluation, the proving overhead of Type 1 is 10 times that of Type 2, so even if you want to build a Type 1 zkEVM, you need to transition in stages and test your architecture along the way. We think the best path is to ship a Type 2 version first. Ethereum's architecture is also constantly being updated: by the time you have a Type 1 with sufficient performance, Ethereum may have changed, and you would have to change again. The difference between Type 1 and Type 2 is mainly whether storage shares the same layout. So our current main focus is getting a Type 2 product ready. The current codebase derives from the community version we co-developed: we changed its storage, designed other corresponding modules, optimized the GPU prover, optimized many other things, and finally compressed the proving time to roughly 10 minutes on our GPU prover, which is a very, very efficient zkEVM. But we will continue to help Ethereum build a Type 1 zkEVM, making it more robust and building Ethereum's future. So our mission is to build an efficient, production-ready, fully audited Type 2 zkEVM, while at the same time building a Type 1 zkEVM for Ethereum.

Because there is still a big performance gap that needs real-world testing, we remain focused on a Type 2 zkEVM, and this does not affect compatibility at all: basically all contracts and all tools, such as Foundry, Remix, and Hardhat, are fully compatible and require no plug-ins. Moreover, the testnet, and the mainnet we will launch soon, support precompiled contracts such as pairing, so compatibility is very good. We simply believe it will take a long stretch of hard work to reach the next stage. At the same time, we believe that without proofs, a system cannot be considered a safe Layer 2, zkEVM, or zkRollup. We adhere to the principle of security.

Our development status is essentially complete: all opcodes, even PUSH0. We may be the only zkRollup that supports PUSH0; in fact, we are the first among all rollups to support it, and we are the only zkRollup that supports pairing and can verify pairings. That is our development progress and compatibility, and our audit is already in progress. You can read our Sepolia blog post.

As for how each team will evolve on compatibility, my personal guess is that everyone will eventually move toward stronger compatibility, except Starkware. Because it has Kakarot on top to support zkEVM, it will not consider the other direction; it should keep going down the Cairo-language path. Now that zkSync is live, the feedback we hear from developers is that it still requires a lot of code changes, and they still do not trust its security. The most important thing for a contract is security: no matter how efficient you are, if code has to be changed or re-audited, the cost falls on the developers, and that is not a sustainable direction. So we feel a zkEVM must be highly compatible, so developers do not have to modify their own code. This is very, very important, and I think everyone will keep working in this direction.

But I do not think "Type 1" is a very precise label. I always just say Ethereum-equivalent, EVM-equivalent, or compatible at the language level, because that is a more intuitive and direct summary; the Type classification has no precise definition. Vitalik only gave a rough taxonomy, which I think can change. It is hard to define a company's vision by its current stage. Our plan is to first launch our testnet and mainnet as Type 2, run practical and performance tests, then consider further upgrades, while continuing to help Ethereum build a Type 1 zkEVM. That is our path. Using Type 1 and Type 2 as labels is quite inaccurate because these are staged goals.

Generally speaking, if you want to build a zkEVM, you should develop toward compatibility. It has now been shown that the technology has really improved a lot and a zkEVM can be made very, very fast, so I see no need to sacrifice compatibility for two or three times the efficiency. I even question whether it actually ends up faster. So I think development will continue in the direction of better compatibility.

FF: I see, thank you. Scroll keeps improving performance while maintaining compatibility; other teams may choose different trade-offs, but everyone may eventually converge on Ethereum compatibility, except Starkware. From the user's perspective, though, the most direct impressions come from two things: speed and cost. On speed and block time, Scroll has been stable at about 3 seconds; Polygon used to be around 10 seconds but recently dropped to a bit over 3; Linea is also currently around 3 seconds. What determines the 3-second block time? Is it the performance of the sequencer? Everyone currently runs a centralized sequencer, yet typical alt-L1s, which run full consensus, have even shorter block times. Are there considerations or bottlenecks here?

Ye: The 3 seconds we have designed can actually be shortened further. We currently set it at 3 seconds during this testing phase based on the size of our prover pool. If you shrink the block time, you need more provers to prove blocks in time; if you shorten it too much, uploading data on-chain may become the bigger bottleneck. Moreover, the real throughput of zkRollups today is not in the thousands; real demand may only be in the dozens, so producing blocks extremely fast is not very useful. A centralized sequencer can go very fast, but this is also a transition period, constrained by prover capacity and the on-chain data bottleneck.

In fact, you can estimate the maximum throughput and work backwards to the block time. Block time is itself a trade-off: for example, 3-second blocks with 10 million gas of space, versus 30-second blocks with 100 million gas. Looking at block time alone, you could produce a block containing a single transaction in a few milliseconds. Arbitrum, for example: some time ago I remember its blocks held only one transaction and were produced very fast; I do not know the current status. So a better way to look at a chain's capacity is gas per second, how much gas you can process each second. That is a more scientific measure than throughput or block time. It depends entirely on whether your bottleneck is the chain or the prover; it is not only the efficiency of the centralized sequencer, but the product of many factors.
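A quick worked example of the gas-per-second metric, using the hypothetical figures from the answer above:

```python
# Capacity in gas per second normalizes away the block-time trade-off.
configs = {
    "3s blocks, 10M gas":   (10_000_000, 3),
    "30s blocks, 100M gas": (100_000_000, 30),
}
for name, (gas_limit, block_time_s) in configs.items():
    print(f"{name}: {gas_limit / block_time_s:,.0f} gas/s")
# Both lines print 3,333,333 gas/s: identical capacity
# despite a 10x difference in block time.
```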

The second point is that once everyone decentralizes, it also depends on the decentralization scheme. If you run a consensus protocol, there is permissioned and permissionless consensus. BFT, for example, may be fast, while a longest-chain protocol may be very slow. This is each rollup's own philosophical choice: some prioritize faster finality and user experience, while others feel they need more decentralization and may give up some of the block-time advantage.

FF: I see, thank you. So block time is ultimately determined by each chain's choices. On cost, we also want to ask: now that zkSync and Polygon zkEVM are live on mainnet, we see their gas fees are actually higher than optimistic rollups'. But zkSync and Polygon zkEVM may be two different situations: on zkSync too many people are interacting, so the gas price rises; on Polygon zkEVM too few people are interacting, so the shared L1 cost per transaction is high, and fees are naturally high. Scroll currently has very low fees on the testnet. How will you address these two problems after mainnet launch?

Ye: You can take a look at our Sepolia blog post, which describes many of the optimizations we have done and contains many technical summaries. We made a lot of optimizations after the Goerli testnet. There is a diagram in the Sepolia post explaining how we compress proofs. Specifically, we changed the previous one-proof-per-block design into a three-layer structure of Batch, Chunk, and Block. We first aggregate the block proofs into one proof, then aggregate those proofs again: two major layers of aggregation, with further aggregation inside each, to compress proof and verification. We have done a lot of work to reduce verification cost, and we are also looking at better recursive proof algorithms. Another lever is controlling frequency, including block production frequency and the frequency of submitting data on-chain; we also made many optimizations to the cross-chain bridge, ultimately cutting gas costs by 50%. All of this can be found in the blog post.
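Schematically, the Block, Chunk, Batch aggregation looks like the sketch below; the prove and aggregate functions are placeholders for recursive proving, not Scroll's actual prover API.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    covers: list  # which blocks this proof attests to

def prove_block(block_id) -> Proof:
    return Proof(covers=[block_id])

def aggregate(proofs) -> Proof:
    # One recursive proof attesting that all child proofs verify.
    return Proof(covers=[b for p in proofs for b in p.covers])

blocks = list(range(12))
# Layer 1: block proofs are aggregated into chunk proofs (4 blocks each here).
chunks = [aggregate([prove_block(b) for b in blocks[i:i + 4]])
          for i in range(0, len(blocks), 4)]
# Layer 2: chunk proofs are aggregated into one batch proof posted to L1.
batch = aggregate(chunks)
assert batch.covers == blocks  # one on-chain verification covers all 12 blocks
```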

There will still be many optimization directions in the future. For example, we are studying how to upgrade our cross-chain bridge after EIP-4844 and use blobs to further reduce gas fees; we will have a dedicated blog post on that later. And there is a bigger point: why is zkEVM still more expensive than OP rollups right now? Because all the ZK projects have been focused on ZK itself and on making the zkEVM work; the technology is already very complex, so we have not yet reached the deep-optimization stage. OP teams are different: they have been live for a long time, so they must pay attention to cost. The ZK teams' optimization work is just starting, for example on-chain data compression: reducing the data posted on-chain without moving it somewhere else. Before, we had to post the original data on the main chain; now we can post compressed data that can still be recovered, and simply prove in the ZK circuit that this data is equivalent to the uncompressed data. Most ZK teams have not done this yet. Once it is done, cost may be compressed much further, so there are still many opportunities to use ZK-friendly compression algorithms to bring this part of the cost down. Right now everyone is focused on making zkEVM better and faster, but cost will become the next very important topic.
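A toy sketch of that compression idea: zlib stands in for whatever ZK-friendly codec a rollup would actually choose, and plain Python stands in for the circuit constraints.

```python
import hashlib
import zlib

tx_data = b"\x00" * 500 + b"transfer(alice,bob,100)" * 8  # toy calldata

posted_on_chain = zlib.compress(tx_data)  # what L1 would store

def circuit_checks(compressed, expected_hash):
    # Inside the real ZK circuit, the prover would show that the posted
    # blob decompresses to data matching the state-transition commitment.
    recovered = zlib.decompress(compressed)
    return hashlib.sha256(recovered).hexdigest() == expected_hash

commitment = hashlib.sha256(tx_data).hexdigest()
assert circuit_checks(posted_on_chain, commitment)
print(f"on-chain bytes: {len(posted_on_chain)} vs raw: {len(tx_data)}")
```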

FF: Got it, thank you. So one possibility is reducing cost through aggregated proofs, and another is compressing cost further through trade-offs at the DA layer. Could Scroll adopt an L3 solution like Starknet's for cost and speed? The recently funded Kakarot is an EVM written in the Cairo language. Ye, is it possible to build another EVM on Scroll, similar to the current L1-L2 architecture, or to write an EVM as a contract the way Kakarot does?

Ye: On the topic of Layer 3, our current focus is our own development, building an easy-to-use and complete system rather than blindly chasing floating narratives. If you want to build a Layer 3, you only need to fork Scroll and deploy it on Scroll. Our code is open source, so forking and deploying is very convenient. As I said before, everyone is telling the Layer 3 story, but to support Layer 3, especially SNARK-based Layer 3, you need to support the pairing precompiled contract. Apart from us, we have not seen any other zkRollup that supports the pairing precompile and can verify pairings. It is as if the most important link is missing, yet people still tell the next chapter of the story. We want to do more than tell a story.

In fact, you can see this by looking at which chain has more ZK applications. There are many native ZK applications on our chain because we support pairing. The story you tell is one thing; whether you can actually support it is another. In my view, Scroll is easy to support, fork, deploy, and verify. A Layer 3 on Scroll is easy to build, and only we can support SNARK-based ZK Layer 3.

The third point is that Kakarot is a very different design. Kakarot is an application on Starkware; it is not a Layer 3. It may develop into one in the future, but currently it is a program written in Cairo that uses Starknet's own sequencer. That is not like building another Layer 3 on Scroll; it is more like writing an EVM in Solidity on Scroll and having users send transactions to that EVM for execution.

FF: I see, thank you. Ye, there is another recent narrative: zkSync has just launched their ZK Stack, which can spin up L3s and L2s with one click. OP and Arbitrum have launched their own solutions before. What does Scroll think of the current RaaS and zkRaaS tracks, and will Scroll also launch its own solution?

Ye: I personally feel this is indeed not our focus right now, and if anyone wants to use a Scroll stack or SDK, they can just fork it directly. We do not need to come up with a fashionable name so that everyone must use it. It is not a focus of ours right now, but it is easy for anyone to use if they want to. That is the first reason.

And personally, take OP Stack, the most popular right now. One current debate is that the framework's story is that every module is very flexible, but how flexible is it really? Can it support zero-knowledge proofs, or Arbitrum's fraud proofs? The standard is very debatable, and it will take a long time of testing to determine what a good technology-stack standard is. I believe Scroll's value is to build together with the community. If we want to build such a standard, it must be built through the community, for example working with Arbitrum, Optimism, zkSync, and Polygon to push a standard jointly. Only that way can all Layer 2s be unified. Otherwise, Arbitrum will certainly not change its own stack for OP Stack and make the interfaces compatible. Each company keeps promoting its own stack, and they will never be compatible with each other. In that case a stack is just a fork of itself, not a truly flexible framework.

In addition, OP Stack is an incomplete framework: there is no proof in it; the most critical link is missing. As a result, a large number of forked chains will feel they are Layer 2s, yet none of them can meet Layer 2 security standards. People use Layer 2 because they believe in Ethereum's security, but no such Layer 2 can achieve Ethereum's security, because none has proofs. Promoting this widely is actually not a good thing for the crypto field: everyone values only narrative, not real security. We feel proof systems are not yet mature, and we do not want to use heavy marketing to push commercialization and attract funds. It is more appropriate to push a framework once its maturity is there. That is the second reason it is not our priority.

The third reason is that we run a complete zkRollup ourselves, and we know how complicated operating one is. You have to consider contract upgradability, sequencer stability, your own prover network and its economic model, and many, many other complex things. I think only a few teams have the ability to run such a stack, or applications that truly need it. We do not think most teams can run their own rollup, and it is not yet time for them to maintain such a system. If a Layer 2 later gets rug-pulled, or a major failure occurs, it will not be good for the entire Layer 2 field. Generally speaking, promoting Layer 2 stacks is a good direction, but if an accident turns Layer 2 into a meme, that is not what we want to see.

The last reason is interoperability. When different applications run their own chains, interaction between them is not so trustless. I think that is a big problem: it splits an otherwise interoperable system. Scroll's current focus is to build the default Layer 2, attract the maximum network effect, and capture application scenarios with interoperability and security requirements. That is the most important thing for us at the moment, and we do not rule out considering related directions in the future. But it is still a bit early to say; we are still watching the demand and some unresolved interoperability issues.

ZK hardware acceleration and prover network

FF: Got it, thank you. From what I hear, Scroll is advancing its Layer 2 in a very pragmatic way rather than focusing on trending narratives. You just mentioned that Layer 2's current development focus is solving problems like performance bottlenecks, not talking about frameworks. We also know that Ye previously published what is arguably the most important paper in ZK hardware acceleration, PipeZK. So Scroll should be far ahead of other competitors in hardware acceleration. What is Scroll's latest progress there? Can you disclose anything, including the current cooperation model and technology upgrades?

Ye: Let me add some background. We were the first team to study the direction of hardware acceleration. Besides PipeZK we also have GZKP; we have studied FPGA, ASIC, and GPU acceleration, so we are very experienced in this direction. We currently have many partners, such as Cysic, who prefer ASICs to support the prover network. But we will not build FPGAs and ASICs ourselves, because those are fields that require specialized teams and skills, and we cannot maintain such a team internally. Our internal team focuses on a GPU solution: we are writing CUDA code (CUDA is NVIDIA's parallel computing platform and programming model for general-purpose computing on GPUs) to build a faster GPU prover. We write software, and our original intention is for more people to be able to run our GPU code, rather than building a very powerful ASIC ourselves to monopolize the market. We encourage this market not to become a zero-sum game but a fair competition, where everyone competes to be the fastest, best-performing prover, whether on ASIC, FPGA, or GPU. We will release a version ourselves, and anyone can use our GPU algorithm to become one of our provers. Our GPU performance is now very, very well optimized, about 10 times faster than the CPU prover. We are still iterating on its performance and thinking about the choice of the next-generation proof system.

As for the specific cooperation model, we are still exploring it. Many hardware companies have already promised to accelerate our stack, and we are very pleased to see the community doing this. We will tell them, from a relatively neutral standpoint, how to run our benchmarks, and give them education and help, rather than picking a favorite to be the winner. We are now thinking about launching prover competitions to encourage everyone to prove faster and faster. When our mainnet first launches, we will not use a completely decentralized prover network, because the whole system still needs to be tested again on mainnet; our testnet has been running for more than half a year, but only after some mainnet testing will we decentralize further. We already have several good proposals for how to decentralize our prover and sequencer, and we may first advance slowly through competitions.

FF: Okay, thank you, Ye. So Scroll focuses more on software optimization and cooperates with other companies on hardware. On the decentralized prover just mentioned: the community cares a lot about Scroll's prover network. As I recall, Scroll was the first to propose a decentralized prover, and with the many GPUs left idle after Ethereum's move from PoW to PoS, Scroll's prover network is a big opportunity for both sides. On behalf of the community: if someone wants to participate in Scroll's prover network in the future, what are the specific GPU requirements, what needs to be adapted, and when can this be tested?

Ye: Our current prover requirement is one CPU plus two or four GPUs. Our GPU requirements are actually relatively low: a 1080 with 8 GB of memory should be able to run our prover. The CPU-side requirements are relatively high, still more than 200 GB of RAM, so it is a relatively expensive prover. At this point, no zkEVM can get its memory requirement below 200 GB. This is the biggest problem, and it is also a direction we are looking at: can we cut a zkEVM block into smaller pieces and then prove the segments separately?

FF: If developers and community members want to try it, when can they test the prover?

Ye: Our CPU prover is completely open source, and you can run it at any time. After the mainnet goes live, we will continue to optimize our GPU prover to support more GPU models, so around that point you can try running the prover yourself. But truly opening it up so everyone can connect to the prover network will still take a while, because you need a system that supports the network. The whole system is designed to be decentralized, but we still need to design the specific incentive model, slashing model, and other pieces. So actually connecting to the network will take some time; if you just want to run a prover, you can do so now, because the prover code is open source.

FF: Okay, thank you, Ye. I am also looking forward to the launch of the prover network. Decentralizing the prover network involves a lot of coordination. Does Scroll have any candidate solutions it can disclose? We have looked at other chains: Polygon adopts a PoE (Proof of Efficiency) scheme, similar to permissionless proof submission where the fastest prover wins. There are also proof markets like Mina's and Nil's. Does Scroll have any innovative solutions of its own here?

Ye: If our whole network were decentralized now, it is fairly certain we would have a Prover-Sequencer Separation (PSS) scheme, analogous to Proposer-Builder Separation (PBS) on Ethereum Layer 1. The sequencer and the prover will definitely be two separate roles, but the specific design on both sides is a long-term problem: if you design the prover first and then the sequencer, you find problems, because in the long run the prover can affect the sequencer and the sequencer can affect the prover. It involves the incentive model of how much of the transaction fee goes to the sequencer and how much to the prover. We have many plans, but we have not decided which one to use yet.

Our current philosophy is to avoid "the fastest prover always wins", because if your system relies on the fastest prover, then someone in the community who runs a prover and finds they cannot be the fastest may go without rewards for a long time, and the entire system comes to depend on that single fastest prover. Once you depend on it, it may lose the incentive to keep upgrading, and when it leaves, your system has a single point of failure, so we try to avoid that as much as possible. As for the specific design, we will gradually make the various proposals public, choose one, and discuss it with the community and listen to its opinions, but it is still relatively early.

FF: I see. It sounds like Scroll's solution leans toward avoiding "the fastest prover always wins", hoping instead for a state of free competition among provers, which encourages the network's long-term development.

Ye: Yes, it should not be the case that the fastest prover always wins.

Proof system

FF: I think that is the best way to bring in more community participants. Next, let's talk about proof systems. When people think of ZK, they still think of two families of proof systems: STARKs and SNARKs. Ye, you said in a previous talk that proof systems are trending toward modularity. Does that mean this classification no longer applies, and that when we discuss a proof system we should instead divide it into its front-end and back-end components? And then STARK is no longer a proof system unique to Starkware. My impression is that STARKs resist quantum attacks; might SNARKs also gain this property in the future?

Ye: Yes, I think the difference between SNARKs and STARKs is indeed very small; it comes down to the polynomial commitment component. The component specific to STARKs is called FRI. SNARKs can now also be quantum-resistant, if you consider that important: Plonky2 and the like are quantum-resistant, and the Halo 2 proof system we use would also be quantum-resistant if its commitment were replaced with FRI. So this is not a particularly sharp distinction, and I think quantum resistance is not the biggest consideration at the moment. People use FRI not for quantum resistance but mostly because it generates proofs faster; efficiency may matter more for a proof system. FRI is indeed a very important direction for the future: it can make proving very fast, but at the same time its verification cost is very high, so you need to recurse constantly to reduce the cost. This is also a direction we are exploring.

Proof systems are indeed very modular now. What we hope to promote is a community standard so that everyone can use the same proof framework, one that can support FRI, STARKs, and SNARKs. That is what we hope to see.

FF: Got it, thank you. Let me ask for more detail here. Scroll's current two-layer proof system uses Halo 2. What we are curious about is, from a trade-off perspective where there is no optimal choice, only the most suitable one: does this mean Halo 2 is the most suitable proof system for zkEVM?

Ye: I wouldn't put it that way, because Halo 2 is really a code framework. When we use Halo 2, we use it as a modular framework for building proof systems. In Halo 2 you can add KZG to get PLONK, add FRI to get a STARK, and add various components to get various new proof systems. That is how I would explain Halo 2; to pin down a specific proof system, you need to add more qualifiers.

For zkEVM, there are several promising directions now. The one we are using, Halo 2 plus KZG, is the most traditional, and its security model is time-tested. Another direction is Halo 2 plus FRI, or using Plonky 2 or STARKs to build a zkEVM, which is also very efficient; according to Polygon's data, their proof system is very efficient too, so we are looking in this direction as well. The main difference is that it does not rely on elliptic curves, which avoids a lot of arithmetic over large elliptic-curve fields; that is an important reason it is so fast. With FRI you can also use the small Goldilocks (64-bit) field to make the zkEVM faster.
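
For a concrete sense of why the Goldilocks field helps, here is a toy Python sketch of its reduction trick. Production code does this with a few 64-bit word operations, which Python only imitates.

```python
# Goldilocks: p = 2^64 - 2^32 + 1, so every field element fits in one
# 64-bit machine word, and 2^64 ≡ 2^32 - 1 (mod p) gives a very cheap
# reduction after each multiplication.
P = 2**64 - 2**32 + 1

def goldilocks_mul(a: int, b: int) -> int:
    """Multiply mod p using the identity 2^64 ≡ 2^32 - 1 (mod p).

    In Rust/C this is a handful of 64-bit adds and shifts, versus the
    general wide-word modular reduction needed for ~256-bit curve fields.
    """
    x = a * b                          # up to 128 bits
    lo, hi = x & (2**64 - 1), x >> 64
    # Fold the high word down: hi * 2^64 ≡ hi * (2^32 - 1) (mod p)
    return (lo + hi * (2**32 - 1)) % P

assert goldilocks_mul(3, 5) == 15
assert goldilocks_mul(P - 1, P - 1) == 1   # (-1) * (-1) == 1 mod p
```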

Another big direction is folding, such as HyperNova, SuperNova, ParaNova and the various other Nova-style proof systems. In principle, FRI is faster than PLONK mainly because its field is different: its finite-field elements are smaller, so arithmetic is faster. The main principle of folding is that when you need to prove 100 identical programs, other proving approaches require either generating 100 proofs or putting the 100 programs together into one big proof. With folding, you can fold the 100 instances together at very small cost and then prove only the final folded one. That removes part of the prover's cost, which is one reason it is very fast. This direction is also very promising, but there are still many unresolved problems: for example, there is no mature development framework for lookup tables across different programs that lets everyone build on Nova, and people are still evaluating where it applies and how efficient it is. I think there is a fairly high chance it works out; it is very efficient for proving many repetitive circuits such as Keccak and ECDSA. A good approach may be to replace parts in stages with folding or FRI, swapping out the performance-critical parts as much as possible. But there are many problems: you still have to connect with the remaining parts, and you also have to consider the audit security of the whole system, and so on.

So I think this is a direction that needs careful comparison. We have done a lot of internal benchmarking to figure out how to build a framework that compares folding and FRI fairly. We have put a lot of work into this; there will be many benchmark results, and there will also be articles discussing our conclusions and why we believe the next generation of proof systems needs to evolve in this direction.

FF: I would like to ask whether folding is similar to using recursion to aggregate circuits by one layer, whereas the proof aggregation we discussed earlier refers to aggregating at the level of proofs.

Ye: Yes, that is the general idea. But it is still very different from recursion, because folding can genuinely take a linear combination of the things to be proved and then prove only once. The results look somewhat similar, but underneath there are many differences.

As for the details, let me give a rough analogy. Suppose you have 100 homework assignments to write. The traditional proving method is to write them one by one. Recursion is a bit like having 100 people each write one copy and then figuring out how to merge the results at the end. Folding is more like one person taking a long pen with 100 nibs and writing all 100 copies in a single stroke. It is a somewhat lazy idea: compress the tasks together and write only once.

With recursion, the workload is not reduced; everything still has to be written, you just find a way to put the results together at the end. For example, after 100 people finish writing, the copies are stacked together and the teacher only grades once, something like that.

FF: Then folding might be like how we used to put 100 sheets of carbon paper underneath and write all the homework in one go.

Ye: Yes, it feels that way, but it's not quite that magical. It still has some cost; for example, when you write through 100 sheets of carbon paper, the mark on the bottom sheet gets lighter.
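
As a toy illustration of the folding idea, here it is for a plain linear relation A·z = b, where folding is just a random linear combination. Real Nova-style schemes handle nonlinear constraints with cross terms and commitments, none of which appear in this sketch; all names are made up for illustration.

```python
import random

# Toy sketch of folding for a *linear* relation A·z = b. If A·z_i = b_i
# for every instance i, then A·(sum r^i·z_i) = sum r^i·b_i, so one check
# on the folded instance suffices (with high probability over random r):
# 100 "assignments" collapse into one, as in the carbon-paper analogy.

def mat_vec(A, z):
    return [sum(a * x for a, x in zip(row, z)) for row in A]

def fold(instances, r):
    """Combine many (z, b) pairs into one using powers of challenge r."""
    n, m = len(instances[0][0]), len(instances[0][1])
    z_f, b_f = [0] * n, [0] * m
    coeff = 1
    for z, b in instances:
        z_f = [zf + coeff * zi for zf, zi in zip(z_f, z)]
        b_f = [bf + coeff * bi for bf, bi in zip(b_f, b)]
        coeff *= r                 # next power of the random challenge
    return z_f, b_f

A = [[1, 2], [3, 4]]
instances = [(z, mat_vec(A, z)) for z in ([1, 1], [2, 5], [7, 0])]
r = random.randrange(1, 2**61)     # verifier's random challenge
z_f, b_f = fold(instances, r)
assert mat_vec(A, z_f) == b_f      # one check instead of three
```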

FF: Then we should expect some upgrades to Scroll's proof system in the future. Speaking of upgrading the proof system, looking at the current architecture, the Geth client submits the execution trace to the prover, and that part does not seem tightly bound, so the proof system is also like a component, and upgrading the proof system is like upgrading a component.

Ye: Yes.
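
A minimal sketch of that loose coupling, with hypothetical types that do not reflect Scroll's actual codebase: the client emits an execution trace, and the proof backend that consumes it is a swappable function.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the trace-to-prover boundary FF describes.
# None of these names reflect Scroll's real interfaces.

@dataclass
class ExecutionTrace:
    block_number: int
    steps: list = field(default_factory=list)  # opcodes, stack/memory deltas, etc.

# The backend is just a function from trace to proof bytes, so swapping
# one proof system for another need not touch the client side.
ProofBackend = Callable[[ExecutionTrace], bytes]

def prove_block(trace: ExecutionTrace, backend: ProofBackend) -> bytes:
    return backend(trace)

def toy_backend(trace: ExecutionTrace) -> bytes:
    return b"proof-for-block-%d" % trace.block_number

print(prove_block(ExecutionTrace(block_number=1), toy_backend))
```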

FF: Another trend is the "lookup singularity" proposed by Barry Whitehat last year, along with a16z's recent release of Lasso and Jolt, which are very large optimizations and upgrades for lookups. What do you think of this trend?

Ye: I think this is also a promising direction. Their core idea is to support very large lookup tables: earlier lookup tables might have on the order of 2^10 to 2^20 entries, while they can now handle tables of around 2^100 entries. I think this is a very interesting direction, but building circuits using only lookup tables is quite challenging. Their idea is to make lookups so cheap that in the future everyone can express all kinds of constraints using lookups alone. In practice, though, most earlier lookup arguments, such as Caulk, Baloo, and cq, can only handle fixed lookup tables, not dynamic ones. I haven't looked closely at whether their new construction supports dynamic lookup tables; if it does, it is a very good and powerful design that can be applied to zkEVM. So I think we still need to watch this for another month or two, see how much of the circuit can be replaced with lookup tables, and then see its efficiency. We have already begun looking in this direction; we just shared the lookup-table paper internally this morning, but have not yet shared the zkEVM paper, and we expect to have some conclusions of our own within the next two weeks.
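
For a concrete taste of what a lookup argument checks, here is a bare-arithmetic sketch of the logarithmic-derivative identity used by some recent constructions such as cq (Lasso itself takes a different route). Real systems prove this identity over a finite field, in zero knowledge; this sketch is only the underlying arithmetic, and the function names are made up.

```python
from fractions import Fraction
from random import randrange
from collections import Counter

# Witness values w are all contained in table t iff, for suitable
# multiplicities m_j:  sum_i 1/(x - w_i) = sum_j m_j/(x - t_j)
# as rational functions; evaluating at a random x catches cheating
# with high probability (Schwartz–Zippel-style).

def lookup_check(witness, table, x):
    m = Counter(witness)               # how often each table entry is used
    lhs = sum(Fraction(1, x - w) for w in witness)
    rhs = sum(Fraction(m[t], x - t) for t in set(table))
    return lhs == rhs

table = list(range(256))               # e.g. a fixed byte-range table
x = randrange(10**9, 10**12)           # random evaluation point
assert lookup_check([7, 7, 200, 42], table, x)   # all in range: passes
assert not lookup_check([7, 300], table, x)      # 300 not in table: fails
```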

Scroll’s Vision and Values

FF: Okay, thank you Ye for sharing so much recent progress in proof systems. The theme of this Talk is Scroll and ZK. The mainnet launch window Scroll has disclosed is Q3–Q4, so what we most hope to hear is your outlook: in the foreseeable future, what ideal state do you hope Scroll and ZK can reach?

Ye: We estimate the mainnet will launch in Q3–Q4 (note: it actually went live on 2023.10.17). Our vision is to bring the next billion users into the Ethereum ecosystem through Scroll. We have always adhered to open source, building together with the community, and staying neutral. We firmly believe that Layer 2 is a scaling technology, and scaling is not only technical, inheriting Ethereum's security and increasing TPS; more importantly, it is about inheriting Ethereum's good qualities. Ethereum believes in decentralization and neutrality: there are many things Ethereum will not do, and Ethereum will not run campaigns pushing everyone to do crazy things, yet many Layer 2s now do a great deal of that, such as very aggressive marketing promotions.

So I hope that Scroll can always hold to its own beliefs and values as it develops. We won't do anything Ethereum wouldn't do. We hope to become everyone's default Ethereum scaling layer. Right now, the other Layer 2s are in fact drifting in directions not aligned with Ethereum, with their own market strategies and their own launch goals, and we hope to be the remaining Layer 2 that stays highly aligned with Ethereum. Because we feel…
