When Computing Power Becomes Infrastructure: Further Reflections on the Decentralized Path of AI
- Core Viewpoint: The article argues that the competitive focus in the AI industry is shifting from model capabilities to the control and allocation of computing power. The AI computing power issue is essentially an infrastructure protocol problem, requiring an open, decentralized protocol layer to coordinate global resources and prevent computing power from being monopolized by a few centralized entities.
- Key Elements:
- The core bottleneck for AI development has shifted from model capabilities to the accessibility of computing power. Computing power is evolving into a form of structural power, with its production and scheduling being highly centralized.
- Drawing on the logic of Bitcoin coordinating global physical resources, AI computing power requires an open protocol layer that incentivizes genuine contributions and verifiable work, rather than closed commercial packaging.
- The project chooses to start with AI inference because its workload is continuous and measurable. It represents an urgent computing power bottleneck in current production environments, making it suitable for testing the efficiency of decentralized networks.
- By designing random inference tasks that cannot be pre-computed and where the cost of faking is higher than performing the work, combined with a spot-check mechanism, the authenticity of computational contributions in the decentralized network is ensured.
- The project's positioning is not to replace centralized giants but to address the open infrastructure layer they struggle to cover, enabling hardware providers and developers to engage directly in competition around computing power.
- Computing power is constrained by chips, energy, and coordination efficiency; it is not an infinitely supplied commodity. Stable, scalable computing power supply will become a scarce source of structural value.
In previous articles, we have repeatedly mentioned a judgment: the AI industry is undergoing a structural shift—the competitive focus is moving from model capabilities to the control and allocation of computing power.
Models can be replicated and algorithms can be matched, but the production, distribution, and control of computing power are rapidly centralizing, increasingly determining who can truly participate in the next phase of AI competition.
This is not an emotional judgment, but a result of long-term observation of industry, technology, and infrastructure evolution.
In this article, based on this judgment, we further supplement a frequently overlooked but extremely critical perspective: the AI computing power issue is, in essence, an infrastructure protocol problem, not merely a technical or product problem.
1. The Real Bottleneck of AI is No Longer at the Model Layer
A fact repeatedly overlooked in today's AI industry is that what restricts AI development is no longer model capability, but the accessibility of computing power.
A common feature of the current mainstream AI systems is that models, computing power, interfaces, and pricing power are highly coupled within the same centralized entities. This is not a "choice" made by a single company or country, but a natural outcome of a capital-intensive industry lacking open coordination mechanisms.
When computing power is packaged and sold as "cloud services," decision-making power naturally concentrates in the following directions:
- Chip manufacturing capability
- Energy and data center scale
- Capital structure and geopolitical advantages
This causes computing power to gradually evolve from a "resource" into a form of structural power. As a result, computing power becomes expensive and opaquely priced, subject to geopolitics, energy availability, and export controls, and deeply unfriendly to developers and small-to-medium-sized teams.
The production, deployment, and scheduling of advanced GPUs are highly concentrated in the hands of a few hardware manufacturers and hyperscale cloud service providers, affecting not only startups but also the AI competitiveness of entire regions and nations. For many developers, computing power has evolved from a "technical resource" to an "entry barrier." The issue is not just about price, but about whether one can obtain long-term, predictable computing capacity, whether one is locked into a single technology and supply system, and whether one can participate in the underlying compute economy itself.
If AI is to become a general-purpose foundational capability, then the mechanism for producing and distributing computing power should not remain in a highly closed state for long.
2. From Bitcoin to AI: The Common Logic of Infrastructure Protocols
We mention Bitcoin not to discuss its price or financial attributes, but because it is one of the few truly successful protocol systems that coordinates global physical resources.
What Bitcoin solves is never just a "ledger" problem, but three more fundamental problems:
- How to incentivize strangers to continuously contribute real-world resources
- How to verify that these resources are indeed contributed and produce work
- How to maintain long-term system stability without a central controller
It uses an extremely simple, yet difficult-to-circumvent method to transform hardware and energy into verifiable "contributions" within the protocol.
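As a reminder of how simple that mechanism is, here is a minimal Python sketch of hash-based proof of work, the basic idea Bitcoin uses to turn expended hardware and energy into a contribution anyone can verify cheaply. This is an illustrative toy, not Bitcoin's actual consensus code.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose hash falls below a difficulty target.

    Finding the nonce is expensive (many hash attempts on average);
    checking it is cheap (one hash). That asymmetry is what turns
    hardware and energy into a verifiable contribution.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """A single hash suffices to confirm the work was actually done."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine(b"block header", difficulty_bits=16)
assert verify(b"block header", nonce, difficulty_bits=16)
```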
AI computing power is moving toward a position strikingly similar to the one energy once occupied.
When a capability is sufficiently foundational and scarce, what it ultimately needs is not more sophisticated commercial packaging, but a protocol layer capable of coordinating resources over the long term.
In the Gonka network:
- "Work" is defined as verifiable AI computation itself
- Incentives and governance rights stem from real computing power contributions, not capital or narratives
- GPU resources are used as much as possible for meaningful AI work, not abstract security consumption
This is an attempt to redefine computing power as "open infrastructure."
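As a rough illustration of what that redefinition implies for incentives and governance, the sketch below models influence as a node's share of verified compute, so that weight tracks contribution rather than capital. The function and field names are assumptions made for illustration, not Gonka's actual accounting.

```python
from typing import Dict

def influence_shares(verified_work: Dict[str, float]) -> Dict[str, float]:
    """Assign each node an influence share proportional to its verified work.

    `verified_work` maps node id -> units of AI computation that passed
    verification in the current period. Capital and narratives do not
    appear here; only verified contribution does.
    """
    total = sum(verified_work.values())
    if total == 0:
        return {node: 0.0 for node in verified_work}
    return {node: work / total for node, work in verified_work.items()}

# Example: three GPU providers with different verified workloads.
print(influence_shares({"node-a": 1200.0, "node-b": 300.0, "node-c": 0.0}))
# {'node-a': 0.8, 'node-b': 0.2, 'node-c': 0.0}
```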
3. Why Start with AI Inference, Not Training?
We chose to start with AI Inference, not because training is unimportant, but because inference has become the most urgent computing power bottleneck in the real world.
As AI moves from experimentation to production environments, the cost, stability, and predictability of continuous inference are becoming the real concerns for developers. And it is precisely in this area that the limitations of centralized cloud services are most apparent.
From a network design perspective, inference possesses several key characteristics:
- Workload is continuous and measurable
- More suitable for efficiency optimization in decentralized environments
- Can genuinely test whether the computing power verification and incentive mechanisms are valid
Training is certainly important; we plan to introduce training capabilities in the future and have already allocated part of the network's revenue to support long-term training needs. But infrastructure must first prove itself against real-world demand.
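To make "continuous and measurable" concrete, here is a minimal sketch of how inference workload could be metered per node. The token-based unit and the class and field names are assumptions chosen for illustration, not the network's actual accounting scheme.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class InferenceMeter:
    """Accumulates a simple, auditable measure of inference work per node."""
    tokens_by_node: Dict[str, int] = field(default_factory=dict)

    def record(self, node_id: str, prompt_tokens: int, output_tokens: int) -> None:
        # Tokens processed are a continuously accruing unit of inference
        # work: every request adds a measurable increment.
        self.tokens_by_node[node_id] = (
            self.tokens_by_node.get(node_id, 0) + prompt_tokens + output_tokens
        )

    def report(self) -> Dict[str, int]:
        return dict(self.tokens_by_node)

meter = InferenceMeter()
meter.record("node-a", prompt_tokens=512, output_tokens=128)
meter.record("node-a", prompt_tokens=256, output_tokens=64)
meter.record("node-b", prompt_tokens=1024, output_tokens=256)
print(meter.report())  # {'node-a': 960, 'node-b': 1280}
```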
4. How Can Decentralized Computing Power Avoid "Fake Computation"?
A common question is: In a decentralized environment, how do you ensure nodes are actually performing AI computation, not fabricating results?
Our answer: embed the verification logic into the computation itself, so that influence can only derive from continuous, genuine computational contributions.
The network, through short-cycle computation phases (Sprints), requires nodes to perform inference tasks on randomly initialized large Transformer models. These tasks:
- Cannot be pre-computed
- Cannot reuse historical results
- Cost more to fake than to perform genuinely
The network does not perform full verification on every computation but uses continuous random sampling and dynamically increasing verification intensity to make faking economically unviable. Nodes that consistently submit correct results over time naturally gain higher participation and influence.
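The sketch below illustrates the spot-check idea in simplified form; the real network's Sprint tasks, sampling schedule, and comparison rules are more involved, and the function names here are illustrative assumptions. A verifier re-runs a random fraction of submitted tasks on a model reconstructed from the same publicly known seed and compares the results.

```python
import hashlib
import random

def run_inference(seed: int, prompt: str) -> str:
    """Stand-in for inference on a model deterministically derived from `seed`.

    In the real setting this would be a large Transformer initialized from a
    fresh shared seed each Sprint; a hash keeps the sketch cheap and deterministic.
    """
    return hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()

def spot_check(submissions, sample_rate: float, rng: random.Random) -> bool:
    """Re-compute a random sample of (seed, prompt, claimed_output) submissions.

    Because the seed is drawn fresh each Sprint, outputs cannot be pre-computed
    or replayed, and the cheapest way to pass the check is to do the work.
    """
    for seed, prompt, claimed in submissions:
        if rng.random() < sample_rate:
            if run_inference(seed, prompt) != claimed:
                return False  # caught a fabricated result
    return True

rng = random.Random(42)
honest = [(7, "p1", run_inference(7, "p1")), (7, "p2", run_inference(7, "p2"))]
print(spot_check(honest, sample_rate=0.5, rng=rng))  # True
```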
5. Competing with Centralized Giants, or Solving Problems at Different Layers?
We are not trying to "replace" OpenAI, Google, or Microsoft.
Large tech companies build efficient AI stacks within closed systems, which is their strength. But this model inherently leads to:
- Restricted access
- Opaque pricing
- Concentration of capabilities in a few entities
We focus on the layers these systems struggle to cover: open, verifiable, infrastructure-level computing power coordination.
It is not a service, but a market and protocol, allowing hardware providers and developers to directly negotiate around computing power efficiency and authenticity.
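Purely as an illustration of what "a market and protocol, not a service" can mean in practice, the sketch below matches developer demand against provider offers by price per unit of verified compute. The order structure, greedy matching rule, and all names are hypothetical; they are not Gonka's actual market design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Offer:
    provider: str
    price_per_unit: float  # price per unit of verified compute
    capacity_units: int

def match(offers: List[Offer], demand_units: int, max_price: float) -> List[Offer]:
    """Greedy matching: fill demand from the cheapest verified capacity first,
    never exceeding the buyer's maximum acceptable price."""
    matched: List[Offer] = []
    remaining = demand_units
    for offer in sorted(offers, key=lambda o: o.price_per_unit):
        if remaining <= 0 or offer.price_per_unit > max_price:
            break
        take = min(offer.capacity_units, remaining)
        matched.append(Offer(offer.provider, offer.price_per_unit, take))
        remaining -= take
    return matched

offers = [Offer("gpu-farm-a", 0.8, 500), Offer("gpu-farm-b", 0.5, 200)]
print(match(offers, demand_units=400, max_price=0.9))
# Fills 200 units at 0.5 from gpu-farm-b, then 200 units at 0.8 from gpu-farm-a.
```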
6. Will Computing Power Be "Commoditized"? Where Will Value Flow?
Many believe that as inference costs decline, value will ultimately concentrate at the model layer. But this judgment often overlooks a premise:
Computing power is not an infinitely supplied commodity.
Computing power is constrained by:
- Chip manufacturing capability
- Energy and geographical distribution
- Infrastructure coordination efficiency
As inference demand continues to grow globally, what will truly be scarce is a stable, predictable, and scalable supply of computing power. Whoever can coordinate these resources holds structural value.
What we are trying to do is not own models, but enable more participants to directly engage in the compute economy itself, rather than just being "paying users."
7. Why is Decentralized Computing Power a Long-term Proposition?
Our judgment does not come from theory, but from the practical experience of building AI systems in centralized environments.
When AI becomes a core capability, computing power decisions often cease to be technical problems and become strategic ones. This centralization is expanding from the commercial level to the geopolitical and sovereign levels.
If AI is the new infrastructure, then the way computing power is coordinated will determine the openness of future innovation.
Historically, every technological wave that truly unleashed productivity ultimately required an open infrastructure layer. AI will be no exception.
Conclusion: Two Future Paths
We are heading towards one of two possible futures:
- Computing power continues to be concentrated by a few companies and nations, and AI becomes a closed capability
- Or, global computing power is coordinated through open protocols, allowing value to flow to real contributors
Gonka does not claim to be the answer, but we know which side we stand on.
If AI is to profoundly change the world, then the computing power infrastructure supporting it also deserves to be redesigned.
About Gonka.ai
Gonka is a decentralized network designed to provide efficient AI computing power, aiming to maximize the utilization of global GPU resources for meaningful AI workloads. By eliminating centralized gatekeepers, Gonka offers developers and researchers permissionless access to computing resources, while rewarding all participants through its native token, GNK.
Gonka is incubated by the US AI developer Product Science Inc. The company was founded by the Liberman siblings, seasoned Web2 industry veterans and former core product directors at Snap Inc. It raised $18 million in 2023 and an additional $51 million in 2025. Investors include Coatue Management (an investor in OpenAI), Slow Ventures (an investor in Solana), Bitfury, K5, Insight, and Benchmark partners. Early contributors to the project include well-known leaders in the Web2 and Web3 space such as 6 blocks, Hard Yaka, and Gcore.
Official Website | Github | X | Discord | Telegram | Whitepaper | Tokenomics | User Guide



