Original Author: Kyle Samani, Partner at Multicoin Capital
Original compilation: Luffy, Foresight News
In the past two years, the blockchain scalability debate has centered on one topic: modular versus integrated architectures.
Note that discussions in crypto often conflate monolithic and integrated systems. The technical debate between integrated and modular systems spans 40 years of history; far from being new, the conversation in crypto should be framed through the same historical lens.
When weighing modularity against integration, the most important design decision a blockchain can make is how much of the stack's complexity to expose to application developers. A blockchain's customers are application developers, so final design decisions should be made with them in mind.
Today, modularity is largely hailed as the primary way blockchains scale. In this article, I’ll challenge this assumption from first principles, uncover the cultural myths and hidden costs of modular systems, and share the conclusions I’ve drawn from thinking about this debate over the past six years.
Modular systems increase development complexity
By far the biggest hidden cost of modular systems is the added complexity of the development process.
Modular systems greatly increase the complexity that application developers must manage, both in the context of their own applications (technical complexity) and in the context of interactions with other applications (social complexity).
In the context of cryptocurrencies, modular blockchains theoretically allow for greater specialization, but at the cost of creating new complexity. This complexity (both technical and social in nature) is being passed on to application developers, ultimately making it harder to build applications.
For example, consider OP Stack. As of now, it seems to be the most popular modular framework. OP Stack forces developers either to adopt the Law of Chains (which introduces a lot of social complexity) or to fork the codebase and manage the fork independently. Both options create significant downstream complexity for builders. If you choose to fork, will you receive technical support from other ecosystem participants (CEXs, fiat on-ramps, etc.), who must incur costs to comply with your new technical standards? If you choose to follow the Law of Chains, what rules and constraints will be imposed on you today and tomorrow?

Source: OSI model
Modern operating systems (OSes) are large, complex systems containing hundreds of subsystems. They handle layers 2-6 in the diagram above. This is a classic example of integrating modular components to manage the complexity of the stack exposed to application developers. Application developers don't want to deal with anything below layer 7, which is why operating systems exist: to manage the complexity of the layers below so that application developers can focus on layer 7. Modularity, therefore, should not be a goal in itself, but a means to an end.
Every major software system in the world today—cloud backends, operating systems, database engines, game engines, etc.—is highly integrated and composed of many modular subsystems. Software systems tend to be highly integrated to maximize performance and reduce development complexity. The same is true for blockchain.
As an aside, Ethereum itself was a response to the complexity that emerged during the 2011-2014 era of Bitcoin forks. Modularity proponents often cite the Open Systems Interconnection (OSI) model to argue that data availability (DA) and execution should be separated; however, this argument is widely misunderstood. Properly understood, the OSI model leads to the opposite conclusion: it is an argument for integrated systems rather than modular ones.
Modular chains cannot execute code faster
A common definition of a modular chain is one that, by design, separates data availability (DA) from execution: one set of nodes is responsible for DA, while another set (or sets) is responsible for execution. The node sets may overlap, but they don't have to.
In practice, separating DA and execution does not inherently improve the performance of either; some hardware somewhere in the world must provide DA, and some hardware somewhere must perform execution. Separating these functions onto different hardware does not make either faster. Separation can reduce computational costs, but only by centralizing execution.
To reiterate: regardless of modular or integrated architecture, some hardware somewhere has to do the work, and separating DA and execution onto separate hardware does not inherently speed up the system or increase its total capacity.
Some argue that modularity allows multiple EVMs to run in parallel as Rollups, letting execution scale horizontally. While this is theoretically correct, the observation really highlights the limitations of the EVM as a single-threaded processor, rather than validating the fundamental premise that separating DA and execution scales total system throughput.
Modularity alone does not improve throughput.
Modularity increases transaction costs for users
By definition, each L1 and L2 is an independent asset ledger with its own state. These separate pieces of state can communicate (via cross-chain bridges such as LayerZero and Wormhole), albeit with longer transaction latencies and greater complexity for developers and users.
The more asset ledgers there are, the more fragmented the global state of all accounts becomes. This is terrible for chains and for users operating across multiple chains. State fragmentation brings a series of consequences:
Reduced liquidity, leading to higher transaction slippage;
More total Gas consumption (cross-chain transactions require at least two transactions on at least two asset ledgers);
Increased duplicated computation across asset ledgers (thereby reducing total system throughput): when the ETH-USDC price changes on Binance or Coinbase, an arbitrage opportunity appears in every ETH-USDC pool on every asset ledger. (You can easily imagine a world where every ETH-USDC price move on Binance or Coinbase triggers 10+ transactions across various asset ledgers. Keeping prices consistent across fragmented state is an extremely inefficient use of block space.)
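The duplicated-arbitrage cost can be made concrete with a toy calculation (all numbers below are illustrative assumptions, not measured figures):

```python
# Toy illustration (assumed numbers): one CEX price move triggers one arbitrage
# transaction per fragmented pool, multiplying the total gas spent on repricing.
def arbitrage_gas(num_ledgers: int, pools_per_ledger: int, gas_per_tx: int) -> int:
    """Gas consumed keeping every ETH-USDC pool in sync after one price move."""
    return num_ledgers * pools_per_ledger * gas_per_tx

# One integrated chain with one deep pool vs. ten ledgers with one pool each.
integrated = arbitrage_gas(num_ledgers=1, pools_per_ledger=1, gas_per_tx=150_000)
fragmented = arbitrage_gas(num_ledgers=10, pools_per_ledger=1, gas_per_tx=150_000)

print(integrated, fragmented)  # 150000 1500000: 10x the block space for the same repricing
```

The same repricing work consumes ten times the block space when state is split across ten ledgers, which is the inefficiency described above.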
It is important to realize that creating more asset ledgers significantly increases costs across all of these dimensions, especially those associated with DeFi.
The primary input to DeFi is on-chain state (i.e., who owns which assets). When teams launch appchains/Rollups, they naturally fragment state, which is very detrimental to DeFi, whether for developers managing application complexity (bridges, wallets, latency, cross-chain MEV, etc.) or for users bearing the costs (slippage, settlement delays).
The ideal condition for DeFi is assets issued on a single asset ledger and traded within a single state machine. The more asset ledgers there are, the more complexity application developers must manage and the higher the costs users must bear.
App Rollups won’t create new revenue opportunities for developers
AppChain/Rollup proponents argue that incentives will steer app developers to launch Rollups rather than build on an L1 or L2, so that apps can capture MEV value themselves. However, this thinking is flawed: running an application Rollup is not the only way to capture MEV back to application-layer tokens, nor, in most cases, is it the best way. Application-layer tokens can capture MEV simply by encoding the logic in smart contracts on a general-purpose chain. Let's consider a few examples:
Liquidation: If the Compound or Aave DAOs want to capture a portion of the MEV flowing to liquidation bots, they can simply update their respective contracts so that a portion of the fees currently flowing to liquidators goes to their own DAO instead; no new chain/Rollup is required.
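As a sketch of what that contract change amounts to, the fee-split logic might look like the following (parameter names and rates are hypothetical; this is not actual Compound or Aave code):

```python
# Hypothetical sketch (not actual Aave/Compound code): route part of the
# liquidation bonus to the protocol DAO instead of entirely to the liquidator.
def split_liquidation_bonus(seized_collateral: float,
                            bonus_rate: float = 0.05,
                            dao_share: float = 0.30) -> tuple[float, float]:
    """Return (liquidator_bonus, dao_bonus) for one liquidation event."""
    bonus = seized_collateral * bonus_rate   # total liquidation incentive
    dao_cut = bonus * dao_share              # portion redirected to the DAO
    return bonus - dao_cut, dao_cut

liquidator, dao = split_liquidation_bonus(10_000.0)
print(liquidator, dao)  # 350.0 150.0
```

The point is that MEV redirection here is a one-line accounting change inside the existing contract, not a reason to launch a chain.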
Oracle: Oracle tokens can capture MEV by providing backrunning services. In addition to price updates, oracles can bundle arbitrary on-chain transactions that are guaranteed to run immediately after a price update. Therefore, oracles can capture MEV by selling backrunning services to searchers, block builders, and others.
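A minimal sketch of how an oracle might auction those backrunning rights (all names and bids are hypothetical illustrations, not Pyth's or any live protocol's actual mechanism):

```python
# Hypothetical sketch: searchers bid for the transaction slot immediately after
# a price update; the winning bid accrues to the oracle's DAO/token.
def build_bundle(price_update: str, bids: dict[str, int]) -> tuple[list[str], int]:
    """Return (ordered transaction bundle, revenue captured by the oracle)."""
    if not bids:
        return [price_update], 0
    winner = max(bids, key=bids.get)             # highest-bidding searcher wins
    return [price_update, winner], bids[winner]  # winner runs right after the update

bundle, revenue = build_bundle("price_update_eth_usd",
                               {"searcher_a_tx": 40, "searcher_b_tx": 90})
print(bundle, revenue)  # ['price_update_eth_usd', 'searcher_b_tx'] 90
```

Because the update and the backrun are bundled atomically, the oracle, rather than a third-party builder, captures the value of running first.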
NFT Minting: NFT mints are rife with scalping bots. This can be easily mitigated by encoding a decaying profit-reallocation rule. For example, if someone resells their NFT within two weeks of minting, 100% of the proceeds go back to the issuer or DAO; the percentage can then decline over time.
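A minimal sketch of such a decaying clawback (the two-week window comes from the example above; the one-year linear decay is an assumption added for illustration):

```python
# Hypothetical decaying resale clawback: 100% of proceeds return to the issuer
# within two weeks of mint, then decay linearly to 0% over one year (assumed).
def issuer_clawback(resale_proceeds: float, days_since_mint: int) -> float:
    """Portion of resale proceeds redirected to the issuer/DAO."""
    if days_since_mint <= 14:
        return resale_proceeds              # flip within two weeks: 100% clawed back
    if days_since_mint >= 365:
        return 0.0                          # fully vested after a year
    rate = 1.0 - (days_since_mint - 14) / (365 - 14)
    return resale_proceeds * rate

print(issuer_clawback(1_000.0, 7))    # 1000.0
print(issuer_clawback(1_000.0, 400))  # 0.0
```

Again, this is ordinary smart-contract logic on a general-purpose chain; no dedicated Rollup is needed to capture the scalpers' profit.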
There is no universal answer to capturing MEV into application-layer tokens. However, with a little thought, application developers can easily capture MEV back into their own tokens on a general-purpose chain. Launching a brand-new chain is simply unnecessary: it brings additional technical and social complexity for developers, and more wallet and liquidity headaches for users.
Application Rollup cannot resolve cross-application congestion issues
Many believe that application chains/Rollups ensure that applications are not affected by gas spikes caused by other on-chain activity, such as a popular NFT mint. This view is partly true, but mostly wrong.
This is a historical problem whose root cause is the single-threaded nature of the EVM, not the lack of separation between DA and execution. All L2s pay fees to L1, and L1 fees can rise at any time. During the memecoin craze earlier this year, transaction fees on Arbitrum and Optimism briefly exceeded $10. More recently, Optimism fees spiked again following the launch of Worldcoin.
The only ways to deal with fee spikes are to 1) maximize L1 DA and 2) make fee markets as granular as possible.
If L1's resources are constrained, peak usage in any L2 will pass through to L1, imposing higher costs on all other L2s. Application chains/Rollups are therefore not immune to gas spikes.
The coexistence of numerous EVM L2s is just a crude way of trying to localize fee markets. It's better than a single fee market in one EVM L1, but it doesn't solve the core problem. Once you realize the solution is localized fee markets, the logical endpoint is a fee market per piece of state, rather than per L2.
Other chains have already reached this conclusion. Solana and Aptos localize fee markets natively. This required years of extensive engineering work on their respective execution environments. Most modular proponents severely underestimate the importance and difficulty of engineering local fee markets.
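A toy model illustrates the per-state idea (assumed mechanics for illustration only; this is not Solana's or Aptos's actual fee implementation):

```python
# Toy per-state fee market (assumed mechanics): congestion on one piece of state
# raises fees only for transactions touching that state, not for everyone.
class LocalFeeMarket:
    def __init__(self, base_fee: int = 10):
        self.base_fee = base_fee
        self.demand: dict[str, int] = {}  # pending writes per state account

    def submit(self, account: str) -> int:
        """Record a transaction touching `account` and return the fee it pays."""
        self.demand[account] = self.demand.get(account, 0) + 1
        return self.base_fee * self.demand[account]  # scales with local demand only

market = LocalFeeMarket()
for _ in range(100):                  # a hot NFT mint hammers a single account
    market.submit("nft_mint")
print(market.submit("nft_mint"), market.submit("defi_pool"))  # 1010 10
```

In a global fee market, the hundredth mint transaction would have raised the fee for the DeFi user too; here the unrelated account still pays the base fee.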
By launching multiple chains, developers are not unlocking real performance gains. Whenever some application drives transaction volume, costs across all L2 chains are affected.
Flexibility is overrated
Proponents of modular chains argue that modular architecture is more flexible. This statement is obviously true, but does it really matter?
For six years, I've been trying to find application developers with a meaningful need for flexibility that a general-purpose L1 couldn't provide. But so far, outside of three very specific use cases, no one has been able to articulate why flexibility matters or how it directly helps scale. The three use cases where I find flexibility important are:
Applications that take advantage of hot state. Hot state is state needed to coordinate some set of actions in real time; it is committed to the chain only temporarily and does not persist forever. A few examples of hot state:
Limit orders in DEXs such as dYdX and Sei (many limit orders end up being canceled).
Real-time coordination and identification of order flow in dFlow, a protocol that facilitates a decentralized order-flow marketplace between market makers and wallets.
Oracles such as Pyth, a low-latency oracle. Pyth runs as a standalone SVM chain. Pyth generates so much data that the core Pyth team decided it was best to send high-frequency price updates to a standalone chain and then use Wormhole to bridge prices to other chains as needed.
Chains that modify consensus. The best examples are Osmosis (where all transactions are encrypted before being sent to validators) and Thorchain (where transactions within a block are prioritized by fees paid).
Infrastructure that leverages a Threshold Signature Scheme (TSS) in some way. Examples include Sommelier, Thorchain, Osmosis, Wormhole, and Web3 Auth.
With the exception of Pyth and Wormhole, all of the examples above are built with the Cosmos SDK and run as standalone chains. This speaks volumes about the applicability and extensibility of the Cosmos SDK for all three use cases: hot state, consensus modification, and Threshold Signature Scheme (TSS) systems.
However, most of the items in the three use cases above are not applications, they are infrastructure.
Pyth and dFlow are not applications; they are infrastructure. Sommelier, Wormhole, Sei, and Web3 Auth are not applications; they are infrastructure. Among them, there is only one type of user-facing application: the DEX (dYdX, Osmosis, Thorchain).
For six years, I've been asking Cosmos and Polkadot supporters about the use cases enabled by the flexibility they offer. I think there's now enough data to make some inferences:
First, the infrastructure examples should not exist as Rollups, either because they produce too much low-value data (e.g., hot state, whose whole point is that the data is never committed back to L1), or because they perform functions intentionally unrelated to updating state on an asset ledger (e.g., all of the TSS use cases).
Second, the only type of application I've seen that benefits from changing the design of the core system is a DEX. This is because DEXs are rife with MEV, and general-purpose chains cannot match the latency of CEXs. Consensus determines transaction-execution quality and MEV, so consensus-level changes naturally open many opportunities for DEX innovation. However, as noted earlier in this article, the primary input to a spot DEX is the assets being traded. DEXs compete for assets, and thus for asset issuers. Under this framework, a standalone spot DEX chain is unlikely to succeed, because asset issuers' primary consideration when issuing assets is not DEX-related MEV, but general smart-contract functionality and the incorporation of that functionality into their respective applications.
Derivatives DEXs, however, do not need to compete for asset issuers. They rely mainly on collateral such as USDC and on oracle price feeds, and they must lock user assets to collateralize derivatives positions. Therefore, to the extent standalone DEX chains make sense, they are most likely to work for derivatives-focused DEXs such as dYdX and Sei.
Let's consider the common applications that exist on integrated L1s today: games, DeSoc systems (such as Farcaster and Lens), DePIN protocols (such as Helium, Hivemapper, Render Network, DIMO, and Daylight), Sound, NFT exchanges, and more. None of these particularly benefits from the flexibility of modifying consensus, and their asset-ledger requirements are fairly simple, obvious, and common: low fees, low latency, access to spot DEXs, access to stablecoins, and access to fiat on- and off-ramps such as CEXs.
I believe we now have enough data to say with some confidence that the vast majority of user-facing applications share the general requirements enumerated in the previous paragraph. While some applications can optimize other variables at the margin with custom features in the stack, the trade-offs of those customizations are usually not worth it (more bridges, less wallet support, less indexer/query support, fewer fiat on-ramps, etc.).
Launching a new asset ledger is one way to achieve flexibility, but it rarely adds value and almost always introduces technical and social complexity with little ultimate benefit to application developers.
Scaling DA does not require restaking
You'll also hear modular proponents talk about restaking in the context of scaling. This is the most speculative argument modular-chain proponents make, but it's worth discussing.
It goes roughly as follows: through restaking (e.g., via systems like EigenLayer), the ecosystem can restake ETH an unlimited number of times, enabling an unlimited number of DA layers (e.g., EigenDA) and execution layers. Scalability is thereby solved on every axis, while ensuring that ETH continues to accrue value.
Despite the huge uncertainty between today's reality and this theoretical future, let's take for granted that all of the restaking assumptions work as advertised.
Ethereum's DA today is about 83 KB/s. With the introduction of EIP-4844 later this year, that can roughly double to about 166 KB/s. EigenDA can add an additional 10 MB/s, but under a different set of security assumptions (not all ETH will be restaked to EigenDA).
In comparison, Solana currently provides about 125 MB/s of DA (32,000 shreds per block, 1,280 bytes per shred, 2.5 blocks per second). Solana is simply far more efficient than Ethereum and EigenDA. Furthermore, per Nielsen's Law, Solana's DA grows over time.
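A back-of-the-envelope comparison using the approximate figures quoted above (all inputs are the numbers cited in this section, not independently measured):

```python
# Rough comparison using the figures quoted in this section (approximate, and
# subject to change as the networks evolve).
ETH_DA_KBPS = 83       # Ethereum today, ~KB/s
ETH_4844_KBPS = 166    # roughly doubled by EIP-4844
EIGENDA_MBPS = 10      # additional capacity, different security assumptions
SOLANA_MBPS = 125      # Solana today, ~MB/s

solana_vs_eth = SOLANA_MBPS * 1024 / ETH_DA_KBPS
print(round(solana_vs_eth))  # ~1542x: roughly three orders of magnitude
```

Even granting EigenDA's extra 10 MB/s under its weaker security assumptions, the gap to an integrated chain remains more than an order of magnitude.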
There are many ways to extend DA through restaking and modularization, but these mechanisms are simply not necessary today and would introduce significant technical and social complexities.
Built for application developers
After years of thinking about this, I've come to the conclusion that modularity should not be a goal in itself.
A blockchain must serve its customers (i.e. application developers), therefore, a blockchain should abstract infrastructure-level complexity so that developers can focus on building world-class applications.
Modularity is great. But the key to building winning technology is figuring out which parts of the stack to integrate and which to leave to others. As it stands, chains that integrate DA and execution inherently provide a simpler experience for end users and developers, and will ultimately provide a better foundation for best-in-class applications.


