This article comes from: Reforge Research
Compiled by: Odaily Wenser
Editor's note: Ethereum has long been criticized for its high gas costs and security risks, and the recent discussion of parallel EVMs has attracted great attention in the industry. Reforge Research spoke in depth with senior insiders across EVM L1 ecosystems, the AMM industry, and cross-chain protocols to understand how different ecosystems view this topic. Odaily has compiled this article for your reference.
Introduction
In today's computer systems, getting tasks done faster and more efficiently often means processing them in parallel rather than sequentially. This capability, born of the multi-core processor architecture of modern computers, is called parallelization. Tasks that were traditionally handled step by step are now often handled simultaneously, maximizing processor performance. The same principle of performing multiple operations simultaneously also applies to transaction processing in a blockchain network, although rather than utilizing multiple processors, it utilizes the collective verification capabilities of the network's many nodes. Some early examples include:
In 2015, Nano (XNO) adopted a block-lattice structure, giving each account its own blockchain to enable parallel processing and remove the need for network-wide transaction confirmations.
In 2018, the Block-STM (Software Transactional Memory) parallel execution engine paper was published, Polkadot achieved parallelization through a multi-chain architecture, and EOS launched its multi-threaded processing engine.
In 2020, Avalanche introduced parallel processing at its consensus layer (rather than in its serialized EVM C-Chain), and Solana introduced similar innovations with Sealevel.
For the EVM, transactions and smart contract execution have been processed sequentially since its inception. This single-threaded design limits the throughput and scalability of the overall system, a flaw that is particularly evident when network demand surges. As nodes face increasingly heavy workloads, the network inevitably slows down and users face higher costs: to get their transactions prioritized in a congested network, they have to bid higher.
The Ethereum community has been exploring parallel processing as a solution since an EIP proposed by Vitalik in 2017. The original intention was to achieve parallelization through sharding. However, the rapid development and adoption of L2 rollups, which offer simpler and more immediate scalability benefits, shifted Ethereum's development focus from execution sharding to what is now called "danksharding". With danksharding, shards serve primarily as a data availability layer rather than executing transactions in parallel. Since danksharding has not yet been fully implemented, attention has turned to several alternative parallelized L1 networks with EVM compatibility (notably Monad, Neon EVM, and Sei).
Given the legacy of software systems engineering and the scalability successes of other networks, parallel advances in the EVM are inevitable. We look forward to this transition with conviction; the future direction, while unclear, is full of promise. It will have a huge impact on the world's largest smart contract developer ecosystem (currently over $80 billion in TVL). What happens when gas costs drop to a fraction of a cent through optimized state access? How broad does the design space become for application-layer developers? Here's our take on what comes next in a post-parallel-EVM world.
Parallelization is a means, not an end
Scaling blockchains is a multi-dimensional problem, and parallel execution paves the way for other critical infrastructure improvements, such as blockchain state storage.
A major challenge for projects building parallel EVMs is not only enabling computations to occur simultaneously, but also ensuring optimal state access and modification in a parallelized environment. The crux of the matter lies in two main issues:
Ethereum clients and Ethereum itself use different storage data structures (B-trees/LSM trees in client databases vs. Ethereum's Merkle trie), and embedding one data structure inside another yields poor performance.
In parallel execution, asynchronous input/output (async I/O) for transaction reads and updates is crucial; otherwise processes stall waiting on one another's responses, wasting all of the speed gains.
The additional computational work of a large number of extra SHA-3 hashes is minor compared to the cost of retrieving or setting stored values. To reduce transaction processing time and gas costs, the database infrastructure itself must be improved. This is not simply a matter of swapping raw key-value stores for traditional database architectures such as SQL databases: implementing EVM state with a relational model adds unnecessary complexity and overhead compared to a basic key-value store, making load and store operations more expensive. EVM state does not need features like sorting, range scans, or transactional semantics, because it only performs point reads and point writes, and writes are applied in a single batch at the end of each block. Instead, improvements should focus on key considerations such as scalability, low-latency reads and writes, efficient concurrency control, state pruning and archiving, and seamless integration with the EVM. For example, Monad is building a custom state database from scratch called MonadDB. It will leverage the latest kernel support for asynchronous operations while implementing the Merkle trie data structure natively both on disk and in memory.
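As a rough illustration of that access pattern, here is a minimal Rust sketch of a state store doing point reads and point writes with a single batched commit per block. The names (StateStore, flush_block) are hypothetical; a real engine such as MonadDB would issue the flush as asynchronous I/O against an on-disk Merkle trie rather than merging in-memory maps.

```rust
use std::collections::HashMap;

/// Minimal sketch of an EVM-style state store: point reads, point writes,
/// and a write buffer flushed once at the end of each block.
/// Illustrative only; not MonadDB's actual API.
struct StateStore {
    committed: HashMap<[u8; 32], Vec<u8>>, // stand-in for the on-disk KV store
    pending: HashMap<[u8; 32], Vec<u8>>,   // writes buffered during block execution
}

impl StateStore {
    fn new() -> Self {
        Self { committed: HashMap::new(), pending: HashMap::new() }
    }

    /// Point read: pending writes shadow committed state.
    fn get(&self, key: &[u8; 32]) -> Option<&Vec<u8>> {
        self.pending.get(key).or_else(|| self.committed.get(key))
    }

    /// Point write: buffered, not yet durable.
    fn set(&mut self, key: [u8; 32], value: Vec<u8>) {
        self.pending.insert(key, value);
    }

    /// End of block: apply all buffered writes in one batch. A real engine
    /// would issue these as async I/O (e.g. via io_uring) and update the
    /// Merkle trie on disk; here we simply merge the maps.
    fn flush_block(&mut self) {
        for (k, v) in self.pending.drain() {
            self.committed.insert(k, v);
        }
    }
}

fn main() {
    let mut state = StateStore::new();
    let key = [0u8; 32];
    state.set(key, b"balance:100".to_vec());
    assert_eq!(state.get(&key).map(|v| v.as_slice()), Some(b"balance:100".as_slice()));
    state.flush_block(); // single batched commit at the block boundary
}
```

Note how no ordering or range-scan machinery appears anywhere: point get/set plus one batch commit is the entire interface the EVM needs.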
We expect to see further improvements to the underlying key-value database as well as significant improvements to the third-party infrastructure that supports much of the blockchain’s storage capabilities.
Make pCLOBs Great Again
As DeFi transitions to a higher-fidelity state, central limit order books (CLOBs) will become the primary design for trading.
Since their debut in 2017, automated market makers (AMMs) have been a cornerstone of DeFi, prized for their operational simplicity and unique ability to bootstrap liquidity. By combining liquidity pools with pricing algorithms, AMMs revolutionized DeFi and became the best available alternative to traditional trading systems such as order books. Although central limit order books (CLOBs) are a fundamental building block of traditional finance, when they were introduced to Ethereum the mechanism was constrained by the blockchain's scalability. CLOBs are transaction-heavy: every order submission, execution, cancellation, or modification requires a new on-chain transaction. With Ethereum's scaling efforts still immature, that cost made CLOBs impractical in DeFi's early days and doomed early attempts such as EtherDelta. Yet even though AMMs are popular, they face their own inherent limitations, and as DeFi matures and attracts more experienced traders and established institutions, those shortcomings become more obvious.
Once the strengths of CLOBs were recognized, attempts to bring CLOB-based exchanges into DeFi became more common on alternative, more scalable blockchain networks. Protocols such as Kujira, Serum (RIP, the project is offline), Demex, dYdX, Dexalot, and most recently Aori and Hyperliquid aim to provide a better on-chain trading experience than competitors such as AMMs. However, with the exception of projects targeting specific niches (such as dYdX and Hyperliquid for perpetual contracts), CLOBs on these alternative networks face a set of challenges beyond scalability:
Liquidity fragmentation: The network effects created by highly composable, seamlessly integrated DeFi protocols on Ethereum make it difficult for CLOBs on other chains to attract sufficient liquidity and trading volume, limiting their adoption and usability.
Memecoins: Bootstrapping liquidity in an on-chain CLOB requires market makers to place limit orders, a chicken-and-egg problem that is even harder for relatively unknown new assets such as memecoins.
CLOBs with blobs
How does L2 perform?
Compared with the Ethereum mainnet, existing Ethereum L2s offer significant improvements in transaction throughput and gas fees, especially after the recent Dencun hard fork (Cancun upgrade). By replacing gas-intensive calldata with lightweight binary large objects (blobs), gas costs have dropped significantly.
According to data from growthepie, as of April 1 the gas costs on Arbitrum and OP Mainnet were $0.028 and $0.064 respectively, with Mantle the cheapest at just $0.015. This is a far cry from gas costs before the upgrade, when calldata fees accounted for 70-90% of the total. Unfortunately, it still isn't cheap enough: a fee of around $0.01 per order submission or cancellation remains steep.
Consider institutional traders and market makers, who place many orders relative to the number of trades actually executed and therefore typically have a high order-to-trade ratio. Even at today's L2 fee levels, paying to submit orders and then modify or cancel them on the order book can materially affect the profitability and strategic decisions of institutional players. Imagine the following example:
Company A: 10,000 order submissions, 1,000 trades, and 9,000 cancellations or modifications per hour is a fairly standard benchmark. If the company operates across 100 order books throughout the day, then even at less than $0.01 per action, the overall operation easily costs more than $150,000 per day.
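Spelling out the arithmetic (assuming a mid-range post-Dencun fee of about $0.007 per on-chain action, within the figures quoted above):

10,000 actions/hour × 24 hours × 100 order books = 24,000,000 on-chain actions per day
24,000,000 actions × $0.007 ≈ $168,000 per day

Even at the lower $0.006 end that is roughly $144,000, so the $150,000 figure is easily reached once any congestion pricing kicks in.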
New solution: The pCLOB
With the emergence of parallel EVMs, we expect a surge in DeFi activity, driven by the newfound viability of on-chain CLOBs. And not just CLOBs, but programmable central limit order books (pCLOBs for short). Given DeFi's inherent composability, a pCLOB can interact with countless protocols (limited only by gas) across a large number of trading pairs. Leveraging this, a pCLOB enables custom logic to be embedded in the order submission process, invoked before or after an order is submitted. For example, a pCLOB smart contract could contain custom logic to (see the sketch after this list):
- Validate order parameters (e.g. price and quantity) against predefined rules or market conditions
- Perform real-time risk checks (e.g. ensuring adequate margin or collateral for leveraged trades)
- Apply dynamic fee calculations based on arbitrary parameters (e.g. order type, trading volume, market volatility)
- Execute conditional orders based on specified trigger conditions
…and still cost less than existing exchange designs.
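To make the hook idea concrete, here is a minimal Rust sketch of a pre-submission pipeline covering the first three items above. The Order struct, PreSubmitHook trait, and every parameter name are invented for illustration; no live pCLOB exposes this API.

```rust
/// Hypothetical pre-submission hooks for a pCLOB: each hook can reject an
/// order or rewrite it (e.g. adjust its fee) before it enters the book.
struct Order {
    price: u64,    // price in ticks
    quantity: u64, // size in base units
    margin: u64,   // posted collateral
    fee: u64,      // fee charged, may be adjusted by hooks
}

trait PreSubmitHook {
    fn apply(&self, order: &mut Order) -> Result<(), String>;
}

/// Validate order parameters against predefined bounds.
struct ParamCheck { max_price: u64 }
impl PreSubmitHook for ParamCheck {
    fn apply(&self, o: &mut Order) -> Result<(), String> {
        if o.quantity == 0 || o.price == 0 || o.price > self.max_price {
            return Err("invalid price/quantity".into());
        }
        Ok(())
    }
}

/// Real-time risk check: require margin proportional to notional value.
struct MarginCheck { margin_bps: u64 }
impl PreSubmitHook for MarginCheck {
    fn apply(&self, o: &mut Order) -> Result<(), String> {
        let required = o.price * o.quantity * self.margin_bps / 10_000;
        if o.margin < required {
            return Err("insufficient margin".into());
        }
        Ok(())
    }
}

/// Dynamic fee: scale the fee with order size (a stand-in for volume- or
/// volatility-based schedules).
struct DynamicFee { base_fee: u64 }
impl PreSubmitHook for DynamicFee {
    fn apply(&self, o: &mut Order) -> Result<(), String> {
        o.fee = self.base_fee + o.quantity / 1_000;
        Ok(())
    }
}

fn submit(order: &mut Order, hooks: &[&dyn PreSubmitHook]) -> Result<(), String> {
    for h in hooks {
        h.apply(order)?; // any hook can veto the order before it hits the book
    }
    Ok(())
}

fn main() {
    let hooks: [&dyn PreSubmitHook; 3] = [
        &ParamCheck { max_price: 1_000_000 },
        &MarginCheck { margin_bps: 500 },
        &DynamicFee { base_fee: 2 },
    ];
    let mut order = Order { price: 100, quantity: 10_000, margin: 60_000, fee: 0 };
    match submit(&mut order, &hooks) {
        Ok(()) => println!("order accepted, fee = {}", order.fee),
        Err(e) => println!("order rejected: {e}"),
    }
}
```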
The concept of just-in-time (JIT) liquidity illustrates this well. Liquidity does not sit idle on any single exchange; it earns yield elsewhere until the moment an order is matched, at which point it is withdrawn from the underlying platform. Who wouldn't want to earn one last bit of yield on MakerDAO before their liquidity is tapped for a trade? The innovative "quote-as-code" approach enabled by Mangrove Exchange hints at this mechanism's potential: when a quote in the order book is matched, the code embedded in it executes, and its only task is to source the liquidity the taker requested. Still, challenges around L2 scalability and cost remain.
Parallel EVMs also fundamentally enhance the matching engine of a pCLOB. A pCLOB can implement a parallel matching engine that uses multiple "channels" to process incoming orders and perform matching calculations simultaneously. Each channel can process a subset of the order book without violating price-time priority, executing only when a match is found. Reduced latency between order submission, execution, and modification makes order book updates more efficient.
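Below is a toy illustration of the channel idea under the simplest partitioning scheme we can assume: orders are sharded by market, each thread owns one book outright, and price-time priority only has to hold within that book. The Book type and match_sell function are our invention, not any production matching engine.

```rust
use std::collections::BTreeMap;
use std::thread;

/// One resting bid book per market: price -> FIFO queue of resting sizes.
/// The BTreeMap keeps price order; FIFO within a level preserves
/// price-time priority inside each market.
type Book = BTreeMap<u64, Vec<u64>>;

/// Match one incoming sell order against the best (highest) bids.
fn match_sell(book: &mut Book, mut qty: u64, limit: u64) -> u64 {
    let mut filled = 0;
    while qty > 0 {
        // best bid is the highest price level
        let Some((&price, _)) = book.iter().next_back() else { break };
        if price < limit { break; } // no bid meets the seller's limit
        let queue = book.get_mut(&price).unwrap();
        while qty > 0 && !queue.is_empty() {
            let take = qty.min(queue[0]);
            queue[0] -= take;
            qty -= take;
            filled += take;
            if queue[0] == 0 { queue.remove(0); } // earliest order fills first
        }
        if queue.is_empty() { book.remove(&price); }
    }
    filled
}

fn main() {
    // Two independent markets ("channels"); each thread matches its own
    // book, so the books never contend with each other.
    let markets: Vec<Book> = vec![
        BTreeMap::from([(100, vec![5, 3]), (99, vec![10])]),
        BTreeMap::from([(200, vec![7])]),
    ];
    let handles: Vec<_> = markets
        .into_iter()
        .map(|mut book| thread::spawn(move || match_sell(&mut book, 6, 99)))
        .collect();
    for h in handles {
        println!("filled: {}", h.join().unwrap());
    }
}
```

The caveat is the same one raised throughout this piece: an order that spans markets, or shared margin across books, reintroduces cross-channel contention and has to be handled more carefully.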
Due to their ability to continuously make markets under illiquid conditions, AMMs will likely remain widely used for long-tail assets; for blue-chip assets, however, pCLOBs will dominate.
——Keone, co-founder and CEO of Monad
In a discussion with us, Keone said he expects multiple pCLOBs to emerge across different high-throughput ecosystems, and emphasized that these pCLOBs, with their lower operating fees, will have a significant impact on the broader DeFi ecosystem.
Even with just a few of these improvements, we expect pCLOBs to have a significant impact in improving capital efficiency and unlocking new categories in DeFi.
Got it, we need more apps, but first...
Existing and new applications need to be architected in a way that takes full advantage of the underlying parallelism.
pCLOBs aside, current decentralized applications are not parallel: their interactions with the blockchain are sequential in nature. However, history shows that technologies and applications naturally leverage new advancements to drive their own growth, even when they were not originally designed with those advancements in mind.
“When the first iPhone launched, the apps designed for it looked a lot like bad computer apps. It's the same story here. Just as we're adding multi-core to blockchains, that will lead to better apps.”
——Steven Landers, blockchain architect of the Sei ecosystem
The development of e-commerce, from magazine-style catalogs on the Internet to today's robust two-sided marketplaces, is a typical example. As parallel EVMs become a reality, we will witness a similar shift in decentralized applications. This highlights a key limitation: applications that do not take parallelism into account at design time will not benefit from the efficiency gains of parallel EVMs. It is not enough to have parallelism at the infrastructure layer without redesigning the application layer; the two must be architecturally aligned.
State contention
Without any changes to applications themselves, we would still expect a 2-4x performance improvement, but why stop there when far more is achievable? This shift poses a key challenge: applications need to be fundamentally redesigned to accommodate the nuances of parallel processing.
If you want to take advantage of throughput, you need to limit contention between transactions.
——Steven Landers, blockchain architect of the Sei ecosystem
More specifically, conflicts arise in a decentralized application when multiple transactions attempt to modify the same state at the same time. Resolving such conflicts requires processing the transactions sequentially, which negates the benefits of parallelization.
There are many ways to resolve such conflicts, which we won't detail here, but the number of potential conflicts encountered in practice depends heavily on the application developer. Even the most popular protocols, such as Uniswap, did not take this limitation into consideration in their initial design and implementation. 0x Taker, co-founder of Aori, a high-frequency off-chain order book system for market makers, spoke with us in depth about the major state contention issues that will arise in a parallelized world. For an AMM with its peer-to-pool model, many traders may trade against a single pool at the same time. Whether a handful of transactions or hundreds, these operations all compete for priority, so AMM designers will have to think carefully about how liquidity is allocated and managed to maximize the benefits of the pool.
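To see why shared state forces serialization, consider a simplified read/write-set conflict check in the spirit of optimistic engines such as Block-STM. The Tx struct and key names are ours for illustration, not any production scheduler's.

```rust
use std::collections::HashSet;

/// Each transaction declares (or is observed to have) the state keys it
/// reads and writes. Two transactions conflict if one writes a key the
/// other touches; conflicting transactions must run in order.
struct Tx {
    reads: HashSet<&'static str>,
    writes: HashSet<&'static str>,
}

fn conflicts(a: &Tx, b: &Tx) -> bool {
    a.writes.iter().any(|k| b.reads.contains(k) || b.writes.contains(k))
        || b.writes.iter().any(|k| a.reads.contains(k))
}

fn main() {
    // Two swaps against the same AMM pool: both write the pool's reserves.
    let swap1 = Tx {
        reads: HashSet::from(["pool:ETH/USDC", "alice.balance"]),
        writes: HashSet::from(["pool:ETH/USDC", "alice.balance"]),
    };
    let swap2 = Tx {
        reads: HashSet::from(["pool:ETH/USDC", "bob.balance"]),
        writes: HashSet::from(["pool:ETH/USDC", "bob.balance"]),
    };
    // A transfer touching neither account nor the pool.
    let transfer = Tx {
        reads: HashSet::from(["carol.balance"]),
        writes: HashSet::from(["carol.balance", "dave.balance"]),
    };

    assert!(conflicts(&swap1, &swap2));     // same pool: must serialize
    assert!(!conflicts(&swap1, &transfer)); // disjoint state: can run in parallel
    println!("swap1 vs swap2 conflict: {}", conflicts(&swap1, &swap2));
}
```

This is exactly why a single hot pool caps an AMM's parallelism: every swap against it conflicts with every other, no matter how many cores the chain has.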
Steven, a core developer in the Sei ecosystem (a parallel EVM L1 network), emphasized the importance of considering state contention in multi-threaded development and noted that Sei is actively researching what parallelization means for applications and how to ensure resources are fully utilized.
Performance predictability
Yilong, co-founder and CEO of MegaETH, also emphasized to us the importance of performance predictability for decentralized applications.
Performance predictability means that a decentralized application can always execute transactions within a bounded time, regardless of network congestion or other factors. One way to achieve this is through application-specific chains; however, while app-chains provide predictable performance, they sacrifice composability.
Parallelization provides a way to experiment with local fee markets to minimize state contention.
——0x Taker, co-founder of Aori
Additionally, advanced parallelism and multi-dimensional fee mechanisms can enable a single blockchain to provide more deterministic performance for each application while maintaining overall composability.
Solana has a great localized fee market system: if many users access the same state, they pay a higher fee (surge pricing) rather than bidding against everyone else in a global fee market. This approach is particularly beneficial for loosely coupled protocols that need both performance predictability and composability.
To understand the concept, think of a highway system with multiple lanes and dynamic tolling. During peak hours, the highway can allocate dedicated express lanes to vehicles willing to pay higher tolls. These express lanes ensure predictable, faster travel times for those who prioritize speed and are willing to pay a premium, while the general lanes remain open to all vehicles, maintaining the overall connectivity of the system.
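Translating the analogy into code, here is a toy local fee market. The per-key doubling surcharge is an invented curve for illustration, not Solana's actual pricing rule.

```rust
use std::collections::HashMap;

/// Toy local fee market: each state key (account, pool, ...) carries its
/// own congestion surcharge, so traffic on a hot key does not raise fees
/// for unrelated transactions.
struct LocalFeeMarket {
    base_fee: u64,
    recent_hits: HashMap<String, u32>, // accesses per key in the current window
}

impl LocalFeeMarket {
    fn fee_for(&mut self, key: &str) -> u64 {
        let hits = self.recent_hits.entry(key.to_string()).or_insert(0);
        *hits += 1;
        // surcharge grows with contention on *this key only*
        self.base_fee * 2u64.saturating_pow((*hits - 1).min(16))
    }

    fn end_window(&mut self) {
        self.recent_hits.clear(); // fees relax once congestion passes
    }
}

fn main() {
    let mut market = LocalFeeMarket { base_fee: 10, recent_hits: HashMap::new() };
    // A hot memecoin pool gets progressively more expensive...
    println!("{}", market.fee_for("pool:MEME/ETH")); // 10
    println!("{}", market.fee_for("pool:MEME/ETH")); // 20
    println!("{}", market.fee_for("pool:MEME/ETH")); // 40
    // ...while an unrelated transfer still pays the base fee.
    println!("{}", market.fee_for("carol.balance")); // 10
    market.end_window();
}
```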
Imagining the possibilities
While the need to re-architect protocols to align with underlying parallelism may seem extremely challenging, the possible design space for DeFi and other verticals will expand significantly. We can expect to see a new generation of more complex and efficient applications focused on solving use cases that were previously impractical due to performance limitations.
“Back in 1995, the only internet plan was to pay $0.10 per megabyte downloaded, so you chose carefully which websites you visited. Just imagine the change from that to unlimited data; notice how people behaved then, and what became possible.”
——Keone Hon, co-founder and CEO of Monad
We may well return to a scenario reminiscent of the early days of centralized exchanges: a user acquisition war in which DeFi applications, especially decentralized exchanges, wield referral programs (e.g. points, airdrops) and superior user experience as weapons. We could see an on-chain gaming world with any reasonable degree of interactivity, and it would be very different. Hybrid order book-AMMs already exist, but instead of running the CLOB sequencer as an independent off-chain node and then decentralizing it through governance, we can move it on-chain, making it more decentralized while reducing latency and enhancing its composability. Fully on-chain social interaction also becomes possible. Frankly, any scenario with a large number of actors or agents operating simultaneously is now on the table.
Beyond humans, intelligent agents will likely dominate on-chain transaction flow even more than they do now. AI agents acting as arbitrage bots with the ability to execute transactions autonomously have been around for a long time, but their participation will grow exponentially. Our view is that every form of on-chain participation will be augmented by AI to some extent, and latency requirements for agent-driven trading will matter more than we imagine today.
Ultimately, technological progress is just the enabler. The winners will be those who attract users and channel volume and liquidity better than their competitors. The difference is that developers now have more to build with, and more to do.
Crypto App User Experience Sucks…Now, It’s About to Get Better
User experience unification (UXU) is not only possible, it's necessary, and the industry is clearly moving toward making it a reality.
Thank you, GPT Man
Today's blockchain user experience is fragmented and cumbersome: users must jump between multiple blockchains, wallets, and protocols, wait for transactions to complete, and face the risk of security breaches or hacks along the way. The ideal future is one where users interact with their assets seamlessly and securely without having to think about the underlying blockchain infrastructure. This transition from today's fragmented experience to a unified, simplified one is what we call User Experience Unification (UXU).
Essentially, improving blockchain performance, especially through lower latency and lower fees, can go a long way toward solving user experience problems. Historically, performance advances have tended to improve every aspect of our digital user experience. Faster internet speeds, for example, not only enabled seamless online interactions but also created demand for richer, more immersive digital content. The advent of broadband and fiber-optic technologies enabled low-latency streaming of high-definition video and real-time online gaming, raising user expectations of digital platforms. This growing appetite for depth and quality drives companies to keep innovating toward the next big, engaging thing, from advanced interactive web content to sophisticated cloud-based services to virtual and augmented reality experiences. Increased network speeds not only improved the online experience itself, but further expanded the scope of user demand.
Likewise, improvements in blockchain performance will not only directly enhance the user experience by reducing latency, but will also indirectly fuel the rise of protocols that unify and enhance the overall user experience; performance is a prerequisite for their existence. Parallel EVM networks in particular combine better performance with lower gas costs. For users this means smoother on-chain operations, which in turn attracts more developers to build out the ecosystem. In our conversation with Sergey, co-founder of the cross-chain interoperability network Axelar, he envisioned a world that is both interoperable and symbiotic.
“If you have complex logic that needs to live on a high-throughput chain (i.e., a parallel EVM), which can absorb that logic and its throughput requirements, then you can use interoperability solutions to export that functionality to other chains in an efficient way.”
——Sergey Gorbunov, co-founder of Axelar
As scalability issues are resolved and interoperability between ecosystems increases, we will see protocols emerge that bring the Web3 user experience on par with Web2: for example, v2 intent-based protocols, advanced RPC infrastructure, chain abstraction support, and AI-enhanced open computing infrastructure.
“As network throughput increases, state orchestration by our nodes will accelerate, because solvers can interpret our intents much faster.”
——Felix Madutsa, co-founder of Orb Labs
Tomorrow's rising stars
As performance requirements increase, the oracle market will flourish.
Parallel EVMs mean higher performance requirements for oracles, a vertical that has been strikingly underdeveloped in recent years. Strong demand from the application layer will revitalize an industry plagued by poor performance and poor security, and there is an untapped market for products that improve DeFi's composability. Market depth and trading volume, for example, are powerful signals for many DeFi builders. We expect big players like Chainlink and Pyth to adapt quickly as new entrants challenge their market share. After a conversation with a senior member of Chainlink, our views are aligned: “The consensus [within Chainlink] is that if parallel EVMs gain dominance, we may want to redesign our smart contracts to capture value from them (for example, reducing dependencies between contracts so that transactions/calls do not depend unnecessarily on one another's execution and thus become exposed to MEV). But since parallel EVMs aim to improve the transparency and throughput of applications already running on the EVM, it should not affect network stability.”
This shows that Chainlink understands the impact of parallel execution on its products: as noted above, taking advantage of parallelization will require redesigning their smart contracts.
This is not an exclusive party for L1. Parallel EVM L2 also wants to participate.
From a technical perspective, building a high-performance parallel EVM L2 is easier than building an L1, because an L2's sequencer setup is simpler than the consensus mechanisms used in traditional L1 systems (such as Tendermint and its variants). The simplicity comes from the fact that a sequencer in a parallel EVM L2 only needs to maintain an ordering of transactions, rather than requiring many nodes to agree on that ordering as in consensus-based L1 systems.
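The contrast shows up in even a deliberately naive sketch: the sequencer's entire job below is stamping a total order and cutting batches, with no proposal or voting rounds. The Sequencer type is illustrative, not any production design.

```rust
use std::collections::VecDeque;

/// A deliberately naive L2 sequencer: assign a monotonically increasing
/// sequence number to each incoming transaction and cut a batch when full.
/// No votes, no rounds, no quorum: ordering is the whole job. A consensus-
/// based L1 would instead run proposal/vote rounds across many nodes
/// before any transaction's position is final.
struct Sequencer {
    next_seq: u64,
    batch: Vec<(u64, Vec<u8>)>,
    batch_size: usize,
}

impl Sequencer {
    fn ingest(&mut self, tx: Vec<u8>) -> Option<Vec<(u64, Vec<u8>)>> {
        self.batch.push((self.next_seq, tx));
        self.next_seq += 1;
        if self.batch.len() >= self.batch_size {
            // batch order is now fixed; (possibly parallel) execution
            // happens downstream against this fixed order
            return Some(std::mem::take(&mut self.batch));
        }
        None
    }
}

fn main() {
    let mut seq = Sequencer { next_seq: 0, batch: Vec::new(), batch_size: 2 };
    let mut mempool: VecDeque<Vec<u8>> =
        VecDeque::from([b"tx-a".to_vec(), b"tx-b".to_vec()]);
    while let Some(tx) = mempool.pop_front() {
        if let Some(batch) = seq.ingest(tx) {
            println!("sealed batch of {} transactions", batch.len());
        }
    }
}
```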
More specifically, we expect optimistic-rollup-based parallel EVM L2s to dominate over ZK-based ones in the short term. Ultimately, we are excited to see the transition from optimistic rollups to ZK rollups happen via general-purpose ZK frameworks like RISC Zero, rather than the traditional approaches used in other ZK rollups. It's just a matter of time.
Are the advantages of the Rust language still there?
The choice of programming language will play an important role in the development of these systems. We prefer Reth, the Rust implementation of Ethereum, to the alternatives. This preference is not arbitrary: Rust has many advantages over other languages, including memory safety without garbage collection, zero-cost abstractions, and a rich type system.
Rust Yes!
As the reader can see, the competition between Rust and C++ is becoming an important contest among the new generation of blockchain development languages. Though often overlooked, it should not be: the choice of language affects the efficiency, security, and flexibility with which developers build systems.
Developers are the ones who build these systems, and their preferences and expertise shape the industry's direction. We firmly believe that Rust will eventually come out on top. However, porting a finished application to another language is far from easy; it requires significant resources, time, and expertise, which further underscores the importance of choosing the right language from the start.
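A small example of the property that matters most for parallel execution engines: the Rust compiler statically rejects unsynchronized shared mutation across threads, so data races are ruled out at compile time, with no garbage collector involved. This is standard-library Rust, nothing project-specific.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state must be explicitly wrapped for cross-thread mutation;
    // handing a bare `&mut u64` to two threads simply does not compile.
    let balance = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let balance = Arc::clone(&balance);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *balance.lock().unwrap() += 1; // race-free by construction
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*balance.lock().unwrap(), 4_000);
    // The safety is enforced entirely at compile time; there is no runtime
    // garbage collector pausing the node.
}
```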
In the context of parallel execution, we cannot fail to mention the Move language.
While Rust and C++ are often the focus of discussion, the Move language has some features that make it equally suitable in this case.
Resource semantics: Move introduces the concept of resources, which can be created, moved, or destroyed but never copied. This guarantees that every resource has exactly one owner at all times, preventing problems common in parallel execution, such as race conditions and data races.
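Rust's own move semantics offer a rough analogy (this is Rust, not Move syntax): a type that is neither Copy nor Clone has exactly one owner, and using it after it has been moved is a compile-time error.

```rust
/// A resource-like type: no `Copy`, no `Clone`, so a Coin can never be
/// duplicated; it can only be created, moved, or dropped, mirroring
/// Move's resources.
struct Coin {
    value: u64,
}

fn deposit(coin: Coin) -> u64 {
    coin.value // taking ownership consumes the coin
}

fn main() {
    let coin = Coin { value: 100 };
    let credited = deposit(coin); // ownership moves into `deposit`
    println!("credited: {credited}");

    // Uncommenting the line below is a compile error ("use of moved
    // value"): the coin cannot be spent twice, and the compiler proves it.
    // let _ = deposit(coin);
}
```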
Formal verification and static typing: Move is a statically typed language with a strong focus on safety. It includes features such as type inference, ownership tracking, and overflow checking that help prevent common programming errors and vulnerabilities. These safety features matter especially in parallel execution, where bugs can be harder to detect and reproduce. The language's semantics and type system are based on linear logic, similar to Rust and Haskell, which makes it easier to reason about the correctness of Move programs; formal verification can thus help ensure that parallel execution is safe and correct.
Modularity: Move encourages a modular design in which smart contracts are composed of smaller, reusable modules. This structure makes it easier to reason about the behavior of individual components and can facilitate parallel execution by allowing different modules to run simultaneously.
Future considerations: the EVM must shed its insecurity
While we paint an extremely optimistic picture of the on-chain universe after parallel EVMs, none of it matters if the flaws in EVM and smart contract security go unaddressed.
Unlike failures of network economics or consensus security, smart contract vulnerabilities in Ethereum DeFi protocols let hackers steal more than $1.3 billion in 2023 alone. As a result, users gravitate toward walled gardens like CEXs (centralized exchanges) or decentralized protocols that mix in centralized components, sacrificing decentralization for a more secure (and performant) experience.
The question is, will the average user care about decentralization?
The lack of inherent security features in the EVM design is the root cause of these vulnerabilities.
Consider the aerospace industry, where strict safety standards make air travel remarkably safe; the blockchain world's approach to security stands in stark contrast. Just as people value their lives above all else, the security of their financial assets is similarly critical. Practices such as exhaustive testing, redundancy, fault tolerance, and strict development standards underpin aviation's safety record, yet these are largely missing from the EVM and, in most cases, from other virtual machine systems.
One potential solution is a dual virtual machine setup, in which a separate virtual machine (e.g. CosmWasm) monitors the real-time execution of EVM smart contracts, much like antivirus software in an operating system. This structure supports advanced checks, such as call stack inspection, designed specifically to reduce hacks. However, this approach would require significant upgrades to existing blockchain systems. We expect newer solutions, like Arbitrum Stylus and Artela, to implement this architecture from the start.
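As a sketch of what call-stack inspection by a watchdog VM could look like, here is a toy monitor that aborts on excessive depth or on reentering a contract whose frame is still open. It is our invention for illustration; the actual designs of Arbitrum Stylus and Artela differ, and a real monitor would need rules for legitimate reentrant patterns.

```rust
/// Toy execution monitor sitting beside the "main" VM. It watches the
/// call stack as frames are pushed and flags patterns associated with
/// exploits: excessive depth, or re-entering a contract already on the
/// stack before its first frame has returned.
struct Monitor {
    stack: Vec<String>, // contract addresses of active frames
    max_depth: usize,
}

#[derive(Debug)]
enum Violation {
    StackTooDeep,
    Reentrancy(String),
}

impl Monitor {
    fn on_call(&mut self, contract: &str) -> Result<(), Violation> {
        if self.stack.len() + 1 > self.max_depth {
            return Err(Violation::StackTooDeep);
        }
        if self.stack.iter().any(|c| c == contract) {
            // the contract is re-entered before its earlier frame returned
            return Err(Violation::Reentrancy(contract.to_string()));
        }
        self.stack.push(contract.to_string());
        Ok(())
    }

    fn on_return(&mut self) {
        self.stack.pop();
    }
}

fn main() {
    let mut monitor = Monitor { stack: Vec::new(), max_depth: 64 };
    monitor.on_call("0xVault").unwrap();
    monitor.on_call("0xToken").unwrap();
    // A classic reentrancy shape: 0xToken calls back into 0xVault
    // before the first 0xVault frame has returned.
    let verdict = monitor.on_call("0xVault");
    println!("{verdict:?}"); // Err(Reentrancy("0xVault")): abort the transaction
}
```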
Existing security mechanisms tend to be reactive, responding to incoming or attempted threats by inspecting mempools or auditing smart contract code. While helpful, these mechanisms do not address the underlying vulnerabilities in virtual machine design, so a more proactive approach is needed to improve the security of blockchain networks and their application layers.
We advocate a fundamental overhaul of blockchain VM architecture to embed real-time protection and other critical security features, possibly via the dual-VM setups proven in industries such as aerospace. Going forward, we strongly support infrastructure improvements that emphasize prevention, so that advances in security keep pace with the industry's advances in performance (i.e., parallel EVMs).
Conclusion
The emergence of parallel EVMs marks an important turning point in the evolution of blockchain technology. By enabling simultaneous execution of transactions and optimizing state access, parallel EVMs open a new era of possibility for decentralized applications. From the resurgence of programmable CLOBs to the emergence of more complex, higher-performance applications, parallel EVMs lay the foundation for a unified and user-friendly blockchain ecosystem.
As the industry embraces this paradigm shift, we can expect a wave of innovation that will push the boundaries of decentralized technology. Ultimately, the success of this transformation will depend on the ability of developers, infrastructure providers and the broader community to adapt and follow the principles of parallel execution, leading to a new future where technology is seamlessly integrated into our daily lives.
The emergence of parallel EVMs has the potential to reshape the landscape of decentralized applications and user experience. By addressing the scalability and performance limitations that have long constrained key verticals such as DeFi, parallel EVMs open the way to a future where complex, high-throughput applications can flourish without compromising on the blockchain trilemma.
Realizing this vision will require more than infrastructure advances. Developers must fundamentally rethink their application architectures to align with parallel processing, minimize state contention, and maximize performance predictability. And while the future is bright, we must stress that security deserves as much priority as scalability.
