Editor’s note: This post is a spiritual successor to Technical Scalability Creates Social Scalability.
Over the past two years, the scaling debate has narrowed and fixated on the central question of modularity vs integration.
(Note that discourse in crypto often conflates “monolithic” and “integrated” systems. There is a rich history of debate in technology over the last 40 years about integrated vs. modular systems at every layer of the stack. The crypto incarnation of this dialogue should be framed through the same lens; this is far from a new debate).
In thinking through modularity vs. integration, the most important design decision a chain makes is how much complexity to expose up the stack to application developers. The customers of blockchains are application developers, so design decisions should ultimately be evaluated with them in mind.
Today, modularity is largely hailed as the primary way that blockchains will scale. In this post, I will question that assumption from first principles, surface the cultural myths and hidden costs of modular systems, and share the conclusions I’ve developed over the past six years of contemplating this debate.
By far the largest hidden cost of modular systems is developer complexity.
Modular systems substantially increase complexity that application developers must manage, both in the context of their own applications (technical complexity), and in the context of interfacing with other applications and pieces of state (social complexity).
In the context of crypto systems, the modular blockchains we see today allow for theoretically more specialization but at the expense of creating new complexity. This complexity—both technical and social in nature—is being passed up the stack to application developers, which ultimately makes it more difficult to build.
For example, consider the OP Stack, which seems to be the leading modular framework as of August 2023. The OP Stack forces developers either to opt into the Law of Chains (which comes with a lot of social complexity, as the name suggests), or to fork away and manage the OP Stack on a standalone basis. Both options create huge amounts of downstream complexity for builders. If you fork off and go your own route, are you going to receive technical support from other ecosystem players (CEXs, fiat on-ramps, etc.) that have to incur costs to conform to a new technical standard? And if you opt into the Law of Chains, what rules and constraints are you placing on yourself today and, even more importantly, tomorrow?
Source: The OSI Model
Modern operating systems (OSes) are large, complex systems comprising hundreds of subsystems. Modern OSes handle layers 2-6 in the image above. They are the quintessential example of integrating modular components to manage the complexity that would otherwise be exposed up the stack to application developers. Application developers do not want to deal with anything below layer 7, and that is precisely why OSes exist: they manage the complexity of the layers below so that application developers don’t have to. Modularity in and of itself, therefore, should not be the goal, but rather a means to an end.
Every major software system in the world today (cloud backends, OSes, database engines, game engines, etc.) is highly integrated while simultaneously being composed of many modular subsystems. Software systems tend to integrate over time to maximize performance and minimize developer complexity. Blockchains won’t be different.
(As an aside, the primary breakthrough of Ethereum was reducing complexity that emerged from the era of Bitcoin forks in 2011-2014. Modular proponents frequently highlight the Open Systems Interconnection (OSI) model to argue that data availability (DA) and execution should be separated; however, this argument is widely misunderstood. A correct first-order understanding of the issues at hand leads to the opposite conclusion: using OSI as an analog is an argument for integrated systems rather than modular ones.)
The common definition of “modular chains” is the separation of data availability (DA) and execution: one set of nodes performs DA, while another set (or sets) performs execution. The node sets may overlap, but they don’t have to.
In practice, separating DA and execution doesn’t inherently improve the performance of either; at the end of the day, some piece of hardware somewhere in the world has to perform DA, and some piece of hardware somewhere has to perform execution. Separating those functions does not increase the performance of either. Separation can, however, reduce the cost of compute, but only by centralizing execution.
Again, this is worth reiterating: regardless of modular vs. integrated architecture, some piece of hardware somewhere has to do the work, and pushing DA and execution to separate pieces of hardware doesn’t intrinsically accelerate either or increase total system capacity.
Some argue that modularity allows many EVMs to run in parallel as rollups, enabling execution to scale horizontally. While this is theoretically correct, this argument actually highlights the constraints of the EVM as a single-threaded processor, rather than addressing the fundamental premise of separating DA and execution in the context of scaling total system throughput.
Modularity alone doesn’t increase throughput.
By definition, each L1 and L2 is a distinct asset ledger with its own state. Those separate pieces of state can communicate, albeit with more latency and with more developer- and user-complexity (i.e., via bridges, such as LayerZero and Wormhole).
The more asset ledgers there are, the more the global state of all accounts fragments. This is strictly worse for chains and users on many fronts: more bridges, more wallets, more latency, more cross-chain MEV, wider spreads, and longer settlement times. It is important to recognize that creating more asset ledgers explicitly compounds costs along all of these dimensions, especially as it pertains to DeFi.
The primary input to DeFi is on-chain state (i.e., who owns which assets). As teams launch app chains/rollups, they will naturally fragment state, which is strictly bad for DeFi, both in terms of the complexity application developers must manage (bridges, wallets, latency, cross-chain MEV, etc.) and the costs users must bear (wider spreads, longer settlement times).
DeFi works best when assets are issued on a single asset ledger and trading occurs within a single state machine. The more asset ledgers, the more complexity application developers must manage, and the more costs users must bear.
Proponents of app chains/rollups argue that incentives will lead app developers to build rollups, rather than building on an L1 or L2, so that they can capture MEV back to their own tokens. However, this thinking is flawed: running an app rollup is not the only way to capture MEV back to an application-layer token, and, in most cases, it is not the optimal way. Applications can capture MEV back to their own tokens simply by encoding logic in smart contracts on a general-purpose chain.
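As an illustrative sketch of one such pattern (all names and numbers here are hypothetical, and the contract logic is simulated in plain Python rather than an on-chain language), consider a fee switch that diverts a share of protocol fees into buying and burning the app’s own token:

```python
# Hypothetical sketch: capturing value back to an app token on a
# general-purpose chain, with no app rollup required. An AMM-style app
# routes a protocol cut of each trade's fee into a buy-and-burn of its
# own token -- ordinary smart contract logic, simulated here in Python.

class AppWithFeeSwitch:
    def __init__(self, protocol_fee_bps: int):
        self.protocol_fee_bps = protocol_fee_bps  # protocol's share of LP fees
        self.treasury = 0.0                       # accrued protocol fees
        self.tokens_burned = 0.0

    def on_trade(self, trade_volume: float, lp_fee_bps: int = 30) -> float:
        """Charge the usual LP fee, divert the protocol cut to the treasury."""
        total_fee = trade_volume * lp_fee_bps / 10_000
        protocol_cut = total_fee * self.protocol_fee_bps / 10_000
        self.treasury += protocol_cut
        return total_fee - protocol_cut  # remainder accrues to LPs

    def buy_and_burn(self, token_price: float) -> float:
        """Spend the treasury buying the app token on-chain and burning it."""
        bought = self.treasury / token_price
        self.tokens_burned += bought
        self.treasury = 0.0
        return bought

app = AppWithFeeSwitch(protocol_fee_bps=2_000)  # protocol keeps 20% of LP fees
app.on_trade(trade_volume=1_000_000)            # $1M of trade volume
burned = app.buy_and_burn(token_price=2.0)      # convert fees into burned tokens
```

The same shape works for other value-capture mechanisms (e.g., routing order flow through an auction whose proceeds accrue to the token); the point is that all of it is expressible as contract logic on an existing chain.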
There is no universal answer to capturing MEV to an application-layer token. However, with a little bit of thinking, app developers can easily capture MEV back to their own tokens on general-purpose chains. Launching an entirely new chain is simply unnecessary, creates additional technical and social complexity for developers to manage, and creates more wallet and liquidity challenges for users.
Many have argued that app chains/rollups ensure that a given app isn’t impacted by a gas spike caused by other on-chain activity, such as a popular NFT mint. This view is partially right, but mostly wrong.
Historically, this has been a problem primarily because of the single-threaded nature of the EVM, rather than because of a lack of separation between DA and execution. All L2s pay fees to the L1, and L1 fees can increase at any time. During the meme coin frenzy earlier this year, trading fees on Arbitrum and Optimism exceeded $10. More recently, fees on Optimism spiked in the wake of the Worldcoin launch.
The only solution to fee spikes is twofold: 1) maximize L1 DA, and 2) make fee markets as granular as possible.
If the L1’s resources are constrained, usage spikes in various L2s will trickle down to the L1, which will impose higher costs on all other L2s. Therefore, app chains/rollups are not immune to gas spikes.
The co-existence of many EVM L2s is just a crude way to try to localize fee markets. It is better than putting everything in a single EVM L1, but does not address the core problem from first principles. When you recognize that the solution is to localize fee markets, the logical endpoint is fee markets per piece of state (as opposed to fee markets per L2).
Other chains have already come to this conclusion. Both Solana and Aptos naturally localize fee markets. This required a ton of engineering work over many years for their respective execution environments. Most modular proponents severely underweight the importance and difficulty of solving the hard engineering problems that make hyper-local fee markets possible.
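To make the idea concrete, here is a toy sketch of a local fee market (this is not Solana’s or Aptos’s actual mechanism; the constants and names are invented for illustration). Each piece of state carries its own EIP-1559-style base fee, so congestion on one hot account does not reprice transactions touching unrelated state:

```python
# Illustrative sketch: fee markets scoped per piece of state. A hot NFT
# mint hammering one account raises the base fee for that account only,
# leaving fees on an unrelated DEX pool untouched.

from collections import defaultdict

TARGET_WRITES_PER_SLOT = 100  # target demand per account per slot (invented)
ADJUSTMENT = 0.125            # EIP-1559-style base-fee adjustment factor

class LocalFeeMarkets:
    def __init__(self):
        # Every account starts at a base fee of 1.0 (arbitrary units).
        self.base_fee = defaultdict(lambda: 1.0)

    def end_slot(self, writes_per_account: dict) -> None:
        """Adjust each account's base fee from its own demand only."""
        for account, writes in writes_per_account.items():
            pressure = (writes - TARGET_WRITES_PER_SLOT) / TARGET_WRITES_PER_SLOT
            clamped = max(min(pressure, 1.0), -1.0)
            self.base_fee[account] *= 1.0 + ADJUSTMENT * clamped

markets = LocalFeeMarkets()
# A hot mint sees 10x the target demand; a DEX pool sees normal traffic.
markets.end_slot({"nft_mint": 1_000, "dex_pool": 100})
```

After the slot ends, only `nft_mint`’s base fee has risen; `dex_pool`’s is unchanged. A global fee market, by contrast, would have repriced both.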
Source: https://blog.labeleven.dev/why-solana
By launching many asset ledgers, developers are naturally increasing technical and social complexity without unlocking real performance gains, even during times when other applications are driving heightened volume.
Modular chain proponents argue that modular architectures are more flexible. This statement is obviously true. But it’s not clear that it matters.
For six years I have been trying to find application developers who need meaningful flexibility that general-purpose L1s cannot provide. But so far, outside of three very specific use cases, there hasn’t been a clear articulation of why flexibility is important, nor of how it directly helps with scaling. The three specific use cases I’ve identified in which flexibility is important are hot state, consensus modification, and threshold signature scheme (TSS) systems.
All of the examples listed above, except for Pyth and Wormhole, are built using the Cosmos SDK and run as standalone chains. This speaks volumes about the quality and extensibility of the Cosmos SDK for all three use cases: hot state, consensus modification, and threshold signature scheme (TSS) systems.
However, most of the items identified in the three sections above are not apps. They are infrastructure.
Pyth and dFlow are not apps; they are infrastructure. Sommelier (the chain, not the yield-optimizer front end), Wormhole, Sei, and Web3Auth are not apps; they are infrastructure. Of those that are user-facing apps, they are all one specific type: a DEX (dYdX, Osmosis, Thorchain).
I have been asking Cosmos and Polkadot proponents for six years about the use cases unlocked by the flexibility they provide. I think there is enough data to infer a few things:
First, the infrastructure examples should not exist as rollups, because they either produce too much low-value data (e.g., hot state; the whole point of hot state is that the data is not committed back to the L1), or because they perform some function that is intentionally orthogonal to state updates on an asset ledger (e.g., all the TSS use cases).
Second, the only type of app that I’ve seen meaningfully change core system design is a DEX. This makes sense because DEXs are rife with MEV, and because general-purpose chains by definition cannot match the latency of CEXs. Consensus is fundamental to trade execution quality and MEV, and so naturally there is a lot of opportunity for innovation in DEXs based on making changes to consensus. However, as noted earlier in this essay, the primary input into a spot DEX is the assets being traded. DEXs compete for assets, and therefore for asset issuers. In this framing, stand-alone DEX chains are unlikely to succeed, because the primary variable that asset issuers think about at the time of asset issuance is not DEX-related MEV, but general purpose smart contract functionality and the incorporation of that functionality into the app developer’s respective app.
However, this framing of DEXs competing for asset issuers is mostly irrelevant for derivatives DEXs, which primarily rely on USDC collateral and oracle price feeds, and which inherently must lock user assets to collateralize derivatives positions. As such, to the extent that standalone DEX chains make sense, they are most likely to work for derivatives-focused DEXs such as dYdX and Sei.
(Note: If you are building a new kind of infrastructure that isn’t captured by the categories above, or a consumer-facing app that genuinely requires more flexibility than what general-purpose, integrated L1s can support, please reach out! It has taken six years to distill the above, and I’m sure this list is incomplete.)
Conversely, let’s consider the apps that exist today across general-purpose, integrated L1s. Some examples: games; Audius; DeSoc systems such as Farcaster and Lens; DePIN protocols such as Helium, Hivemapper, Render Network, DIMO, and Daylight; Sound; NFT exchanges; and many more. None of these particularly benefit from the flexibility that comes with modifying consensus. They all have a fairly simple, obvious, and common set of requirements from their respective asset ledgers: low fees, low latency, access to spot DEXs, access to stablecoins, and access to fiat on-ramps such as CEXs.
I believe we now have enough data to say with some degree of confidence that the vast majority of user-facing applications share the common set of requirements enumerated in the prior paragraph. While some applications can optimize for other variables on the margin with customizations down the stack, the trade-offs that come with those customizations are generally not worth it (more bridging, less wallet support, fewer index/query providers, fewer direct fiat on-ramps, etc.).
Launching new asset ledgers is one way to achieve flexibility, but it rarely adds value, and it almost always creates technical and social complexity with minimal ultimate gains for application developers.
You’ll also hear modular proponents talk about restaking in the context of scaling. This is the most speculative argument that modular-chain proponents make, but it is worth addressing.
It states roughly that because of restaking (e.g., via systems like EigenLayer), the crypto ecosystem as a whole can restake ETH an infinite number of times to power an infinite number of DA layers (e.g., EigenDA) and execution layers. Therefore, scalability is solved in all respects while ensuring value accrual to ETH.
Although there is a tremendous amount of uncertainty between the status quo and that theoretical future, let’s take it for granted that all of the layered assumptions work as advertised.
Ethereum’s DA throughput today is roughly 83 KB/s. With EIP-4844 later this year, that roughly doubles to ~166 KB/s. EigenDA adds an additional 10 MB/s, though with a different set of security assumptions (not all ETH will be restaked to EigenDA).
By contrast, Solana today offers DA throughput of roughly 100 MB/s (32,000 shreds per block × 1,280 bytes per shred × 2.5 blocks per second ≈ 102 MB/s). Solana is so much more efficient than Ethereum and EigenDA because of its Turbine block propagation protocol, which has been in production for three years. Moreover, Solana’s DA scales over time with Nielsen’s Law, which continues unabated (unlike Moore’s Law, which for practical purposes died for single-threaded computation a decade ago).
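Putting the figures above side by side (a back-of-envelope calculation derived from the stated shred parameters; decimal MB, not MiB):

```python
# Back-of-envelope DA comparison using the figures cited in the text.

ETH_DA_TODAY = 83_000                 # bytes/sec (~83 KB/s)
ETH_DA_POST_4844 = 2 * ETH_DA_TODAY   # EIP-4844 roughly doubles it

SHREDS_PER_BLOCK = 32_000
BYTES_PER_SHRED = 1_280
BLOCKS_PER_SEC = 2.5

# Solana DA throughput derived from the stated parameters.
solana_da = SHREDS_PER_BLOCK * BYTES_PER_SHRED * BLOCKS_PER_SEC  # bytes/sec
ratio = solana_da / ETH_DA_POST_4844

print(f"Solana DA: {solana_da / 1e6:.1f} MB/s")          # ~102.4 MB/s
print(f"vs. post-4844 Ethereum: {ratio:.0f}x")
```

Even granting Ethereum the post-4844 doubling, the gap is roughly three orders of magnitude before EigenDA enters the picture.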
There are a lot of ways to scale DA with restaking and modularity, but these mechanisms are simply unnecessary today and introduce significant technical and social complexity.
After contemplating this for years, I’ve arrived at the conclusion that modularity should not be a goal in and of itself.
Blockchains must serve their customers—i.e., application developers—and as such, blockchains should abstract infrastructure-level complexity so that entrepreneurs can focus on building world-class applications.
Modular building blocks are great. But the key to building winning technologies is to figure out which pieces of the stack to integrate, and which pieces to leave for others. And as it stands now, chains that integrate DA and execution inherently offer simpler end-user and developer experiences, and will ultimately provide a better substrate for best-in-class applications.
Thanks to Alana Levin, Tarun Chitra, Karthik Senthil, Mert Mumtaz, Ceteris, Jon Charbonneau, John Robert Reed, and Ani Pai for providing feedback on this post.
© 2025 DeFi.io