
Daniil and David Liberman: AI is Not Just a Battle of Models, But a Battle of Computing Power Infrastructure

Gonka_ai
Guest Columnist
@gonka_ai
2026-03-13 14:09
Artificial intelligence has never been a neutral technology; the computing power infrastructure determines who AI ultimately serves.
AI Summary
  • Core Argument: The article's central thesis is that the future development of artificial intelligence and whom it serves fundamentally depends on who controls the computing power infrastructure, not merely on models or algorithms. Currently, computing power is increasingly concentrated among a few centralized entities, creating a "computing power divide" and risks of lock-in. Therefore, there is a need to build a more efficient, open, and decentralized AI infrastructure driven by actual computational contributions.
  • Key Elements:
    1. Current AI computing power is highly concentrated, controlled by a few cloud service providers and specific countries, leading to high access costs, uneven distribution, and the formation of a "computing power divide."
    2. Many existing decentralized systems have flaws, with significant computing power consumed by consensus mechanisms, and incentive mechanisms failing to effectively reward genuine computational contributions.
    3. The key for enterprises in choosing AI infrastructure lies in strategic flexibility. While early reliance on centralized solutions is convenient, it creates long-term lock-in that is difficult to reverse, increasing future switching costs.
    4. Infrastructure choices have long-term societal impacts. Centralized architectures may entrench inequality, limiting social mobility and opportunities for the next generation's innovation and development.
    5. The Gonka.ai project practices an alternative path, aiming to build a decentralized AI computing power network that maximizes the utilization of global GPUs, rewards genuine computational contributions, and provides permissionless access.

Author | Gonka.ai

Daniil and David Liberman: Artificial Intelligence Is Not Neutral - Infrastructure Determines Who Holds Power

Foreword: Amidst the ongoing global AI discourse, industry focus often centers on model capabilities, technological breakthroughs, and regulatory frameworks. Beneath these discussions, however, a more fundamental question is emerging: Who ultimately controls the computational power infrastructure of AI? In a conversation at the Unlockit Conference, Daniil and David Liberman, futurists, entrepreneurs, investors, and co-creators of the Gonka protocol, presented a core argument: Artificial intelligence has never been a neutral technology; the computational power infrastructure determines whom AI ultimately serves. In their view, the future of AI is not just a technological race but a long-term game centered on the control of infrastructure.

The True Foundation of AI: Not Models, but Compute

Centralized AI infrastructure only appears inevitable when people do not question its underlying assumptions.

For a long time, most discussions about artificial intelligence have focused on models, ethics, or regulation. But beneath these lies an even more decisive layer—computational power. Who owns the compute, who controls access to it, and under what conditions it can be used ultimately determine how AI operates and whom it serves.

Once AI is viewed from this perspective, the current landscape becomes hard to ignore. Research from the OECD and other public data indicates that advanced AI compute is increasingly concentrated in the hands of a few cloud service providers and within a limited number of countries. This creates a widening "compute divide"—a gap between those who have access to the infrastructure and those who do not.

This concentration is not accidental. Today, access to advanced GPUs is controlled by a handful of providers and is increasingly influenced by national-level priorities. The result is expensive, capacity-constrained, and geographically uneven compute distribution—all at a time when AI is becoming critical to scientific, industrial, and social infrastructure.

Meanwhile, current decentralized systems do not automatically solve this problem. Many decentralized systems still consume significant compute on consensus and security overhead, while incentive mechanisms often reward capital rather than genuine computational contribution. This discourages hardware providers and slows innovation at the infrastructure level.

This is where our thinking diverges. We do not start from an ideological stance, nor do we choose decentralization simply to oppose centralized players. We begin with a more practical question: What would AI infrastructure look like if efficiency, access, and contribution could be aligned, rather than being in conflict?

This question ultimately led us to a model: where most compute is used for real AI work, not system overhead; where participation and governance rights are determined by verified computational contribution, not capital; and where access to global GPU resources is designed to be permissionless. In practice, these assumptions are also continuously stress-tested through ongoing open discussions, including real-time collaboration with GPU operators, developers, and researchers—for example, in our Discord community.
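The principle that governance rights follow verified computational contribution rather than capital can be illustrated with a minimal sketch. This is a hypothetical simplification, not Gonka's actual protocol: the function name, data shape, and the idea of simple proportional normalization are all assumptions made here for illustration.

```python
# Illustrative sketch only: the model described above ties participation and
# governance weight to verified computational contribution, not capital.
# All names and the proportional scheme are hypothetical, not Gonka's design.

def governance_weights(verified_compute: dict[str, float]) -> dict[str, float]:
    """Normalize each participant's verified compute into a relative weight."""
    total = sum(verified_compute.values())
    if total == 0:
        return {node: 0.0 for node in verified_compute}
    return {node: work / total for node, work in verified_compute.items()}

# Example: three GPU operators with different amounts of verified work.
weights = governance_weights({"node-a": 600.0, "node-b": 300.0, "node-c": 100.0})
# node-a carries 60% of the weight, node-b 30%, node-c 10%.
```

The key property is that weight can only be earned by doing verifiable work: a participant who contributes no compute holds no influence, regardless of how much capital they stake.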

AI has never been just software. It has always been infrastructure. And infrastructure choices often lock societies into developmental trajectories for decades. Placing this infrastructure under the jurisdiction of a few corporations or nations is not a neutral technical outcome but a structural decision with long-term economic and geopolitical consequences. If intelligence itself is to become abundant, the infrastructure supporting it must be designed for "abundance" from the outset.

The True Success Criteria for Decentralized AI

The difficulty lies mainly in that you are not arguing with people, but with "default assumptions."

The mainstream tech community often optimizes for what works in the short term: speed, capital efficiency, centralized control, and scale through consolidation. These choices are locally rational, but once they become the default, they are rarely questioned. When you challenge these default assumptions, it feels like speaking a different language—not because the ideas are extreme, but because they touch the incentive structures upon which many careers, companies, and strategies are built.

Even more difficult is the issue of timing. Centralized systems often appear very successful before their long-term costs become apparent. While massive investments and infrastructure spending are already visible, the deeper costs often manifest later—such as increased dependency, loss of flexibility, pricing power concentrated in the hands of a few providers, and the inability to change course once the system is deeply embedded.

For us, success does not mean winning an argument or displacing existing players. What success looks like is actually much quieter. Success is when decentralized infrastructure ceases to be a manifesto and becomes mundane: when people use it not because they believe in decentralization, but because it is the most practical choice.

Ultimately, true success is when the entire discussion itself changes. When the question is no longer "Should intelligence be centralized?" but "Why did we ever think it had to be centralized?" By that point, beliefs no longer need to be directly challenged; they evolve naturally.

How Do Enterprises Decide Between Centralized and Decentralized Paths?

AI infrastructure is no longer just a technical layer; it is becoming a strategic dependency.

For enterprises, centralized AI infrastructure creates lock-in effects that are difficult to reverse. Once critical systems depend on a few providers, control gradually shifts from the user to the infrastructure owner. Over time, this affects pricing, access, the pace of innovation, and the range of viable strategic choices.

The underlying issue is strategic flexibility. Centralized infrastructure may work well in the early stages but often solidifies into a long-term dependency. Costs become increasingly difficult to control, alternatives harder to adopt, and architectural decisions ever more costly to change at scale.

The critical decision moment usually comes earlier than most people think. Infrastructure choices are often locked in before their consequences are apparent. Once AI transitions from an experimental phase to daily infrastructure, the cost of changing the underlying architecture increases exponentially. Therefore, the real decision point is not when centralized systems fail, but while they still appear to be working well. Exploring decentralized options early preserves choice; waiting often means the choice has already been made.

If Already Dependent on Centralized Infrastructure, Is It Too Late?

It is rarely truly "too late," but the difficulty increases exponentially over time.

Once most systems are built on centralized AI infrastructure, the challenge is no longer technical but institutional. Workflows, incentive structures, budgets, compliance requirements, and even talent development paths gradually assume that centralization is "just how things work." By then, change is no longer just about migrating infrastructure; it requires unlearning habits, contractual patterns, and mindsets deeply embedded within organizations.

Research on infrastructure lock-in reinforces this. Industry analysis consistently shows that switching costs rise dramatically, not linearly, after years of operating in a centralized cloud environment. This growth stems from long-term contracts, regulatory frameworks, deeply integrated internal processes, and a highly specialized workforce. OECD research also points out that countries and organizations without early access to AI compute face accumulating disadvantages over time, losing not just competitiveness but also architectural freedom—the genuine ability to choose other infrastructure models.

Meanwhile, history shows that infrastructure transitions rarely happen all at once. They usually start at the margins. New use cases, new players, and new constraints create pressure points where centralized systems begin to be insufficient—perhaps too costly, too slow, too restrictive, or too fragile. This is often where alternatives start to matter.

Over time, what truly erodes is choice. The longer centralized infrastructure dominates, the fewer real options remain. Dependencies solidify, and decentralization shifts from a proactive design decision to a reactive correction, one that is always more expensive, more complex, and harder to control.

Therefore, the real risk is not that it's already too late. The real risk is waiting until decentralization is no longer a choice but a necessity forced by systemic failure. The earlier one explores, even just in parallel with centralized solutions, the more space there is to proactively shape the outcome rather than being forced to change under pressure.

For the Next Generation, AI Architecture Will Determine Opportunity Distribution

The next generation needs to understand that technology does not become neutral simply by becoming advanced.

Each generation inherits the infrastructure choices made by its predecessors, often without realizing these choices were once deliberate decisions, not inevitabilities. For the next generation, AI will feel as natural as electricity or the internet does today. Precisely for this reason, the underlying architecture is so crucial—it determines not only what is possible but also for whom it is possible.

The next generation needs to know that access to intelligence can be organized in fundamentally different ways. It can be treated as a shared foundation: open, abundant, and difficult to monopolize. Or it can be fenced off, priced, and controlled, even if it appears convenient and efficient on the surface. Both paths can produce impressive technology, but only one preserves long-term freedom, resilience, and genuine choice.

They should also understand that centralization often arrives quietly. Not through coercion, but through convenience. The initial trade-offs often seem small: slightly lower costs, faster deployment, simpler coordination. But the consequences manifest later—when changing course becomes expensive or nearly impossible.

It is equally important to recognize that infrastructure directly impacts social mobility. Systems that appear technologically neutral can either reduce unequal starting points between people and generations or quietly lock these inequalities in place for decades. As you may know, this is also a topic we care deeply about. The younger generation already faces greater disadvantages at a comparable age than previous generations. The current implementation of AI does not address this issue and may even exacerbate it. In this sense, architectural choices determine not only efficiency but also who truly has the opportunity to experiment, build, and shape the future.

Most importantly, the next generation needs to understand that these systems are still designed by people. Not by fate, not by "the market," and not by the machines themselves. Questioning default assumptions, asking who benefits from a given architecture, and insisting on preserving choice is not resistance to progress. It is precisely how to keep progress open.

Why Choose to Share These Stories at Unlockit?

Unlockit seems to be a discussion space where conversations are not centered on hype, launches, or predictions, but on why people make certain choices. This is important to us. Our story is not really about a specific project or technology, but about identifying structural patterns early and deciding not to treat them as inevitable.

For years, we have operated within mainstream systems: building companies, investing, collaborating with large organizations, and benefiting from centralized infrastructure. We understand from the inside how these systems work. At some point, we realized that repeating the same structures while hoping for different outcomes usually does not produce anything genuinely new. Rather than staying silent or packaging this realization as another success story, we chose to share it openly.

At the same time, we came to Unlockit not only to reflect but also to share practical experiences that have real-world relevance for different groups present. For entrepreneurs, these issues involve infrastructure control, dependency on providers, and the ability to scale without losing flexibility. For investors, they involve long-term risks, infrastructure lock-in, and which models truly create enduring value. For corporate and technology leaders, it's about cost structures, reliability, regulatory constraints, and strategic freedom in a rapidly changing environment.

We wanted to share an alternative path that is already operating in practice—not as a universal answer, but as a different way of thinking: how to build AI infrastructure with fewer dependencies, greater transparency, and more long-term choice. Equally important, we also wanted to hear feedback from those making real decisions at the business, capital, and institutional levels.

We also believe these discussions should not be confined to insiders. Once infrastructure decisions cease to be publicly debated, they quietly solidify into default choices. Unlockit provides a space to reflect on these choices before they become irreversible, which makes participating in this conversation meaningful.

Ultimately, participating in Unlockit is not about explaining what we are doing, but about illustrating why questioning default assumptions still matters, especially in an era where technological progress appears rapid, powerful, and inevitable. It is also about listening to the perspectives of those shaping the future of business, technology, and societal systems.

About Gonka.ai

Gonka is a decentralized network designed to provide efficient AI compute, aiming to maximize the utilization of global GPU power for meaningful AI workloads. By eliminating centralized gatekeepers, Gonka offers developers and researchers permissionless access to compute resources while rewarding all participants through its native token, GNK.

Gonka was incubated by the US AI developer Product Science Inc. The company was founded by the Liberman siblings, Web 2 industry veterans and former core product directors at Snap Inc., and raised $18 million in 2023, followed by an additional $51 million in 2025. Investors include OpenAI backer Coatue Management, Solana backer Slow Ventures, Bitfury, K5, and Insight and Benchmark partners, among others. Early contributors to the project include notable leaders in the Web 2-Web 3 space such as 6 blocks, Hard Yaka, and Gcore.

Official Website | Github | X | Discord | Telegram | Whitepaper | Tokenomics | User Guide
