
How to Understand AI Computing Power Tokens? A Comparison from the GPU Market to the Execution Layer

XT Research Institute
Invited Columnist
@XTExchangecn
2026-01-30 06:03
This article is about 5673 words; reading it in full takes about 9 minutes.
What is the AI Computing Power Market? From centralized clouds to decentralized GPUs, this article explains the supply-demand mechanisms and key players in AI computing power.
AI Summary
  • Core Viewpoint: The AI computing power market pools idle GPU resources worldwide through a decentralized approach. It aims to address the structural bottlenecks that centralized compute resources create for AI development, such as high costs and limited access, and to give developers an alternative path to computing power beyond traditional cloud services.
  • Key Elements:
    1. Currently, high-end GPU computing power resources are highly concentrated in the hands of a few centralized cloud service providers, leading to monopolistic pricing power, uneven geographical distribution, and creating competitive barriers for small and medium-sized AI teams.
    2. The AI computing power market connects dispersed computing power suppliers (such as data centers, miners) with demanders (such as AI startups), using market-based mechanisms for resource scheduling and settlement.
    3. io.net (IO) represents the computing power aggregation model, emphasizing resource integration and providing a user experience close to cloud services, with the core challenge being maintaining performance and stability after scaling.
    4. iExec (RLC) focuses on secure and verifiable execution, using Trusted Execution Environments (TEE) to serve computing tasks with strict requirements for data privacy and integrity.
    5. Phoenix Global (PHB) adopts an ecosystem integration path, treating computing power as part of its multi-layered AI infrastructure platform, aiming to support a complete decentralized AI workflow.
    6. The key to evaluating such projects lies not in nominal computing power scale, but in actual computing power utilization, service stability, and the necessity of the token in real-world usage scenarios.

For many AI developers, the real challenge often lies not in the code, but on the invoice.

Model training requires GPUs, and inference deployment is equally dependent on them. Yet with computing power long concentrated in the hands of a few platforms, developers routinely face high costs, unpredictable scheduling, and resource priorities that can be adjusted at any time. Over time, computing power itself has evolved into an invisible yet very real barrier.

Within the broader AI narrative framework (previously discussed in the comprehensive analysis of the XT AI Zone), the infrastructure layer is quietly reshaping how value is distributed. The emergence of the AI compute marketplace is a response to this reality. It attempts to reconnect GPU resources scattered around the globe through a decentralized approach, so that computing power no longer belongs solely to a few centralized participants. Whether it is the compute aggregation emphasized by io.net (IO), the verifiable execution pursued by iExec (RLC), or the multi-layered AI infrastructure ecosystem built by Phoenix Global (PHB), they are all answering the same question: can computing power be reorganized like a market, rather than being monopolized?


TL;DR Quick Summary

  • GPU computing power has become a structural bottleneck constraining the development of the AI industry.
  • AI compute marketplaces attempt to promote the decentralization of GPU supply through open mechanisms.
  • IO, RLC, and PHB represent different architectural paths for compute marketplaces.
  • Compared to "quantity of compute," compute utilization, stability, and trust mechanisms are more critical.
  • Before participating in AI infrastructure-related assets, structural understanding should take precedence over narrative judgment.

Why Computing Power is Becoming the New Bottleneck in the AI Market

In the early stages of AI development, technological progress was primarily driven by stronger model capabilities and richer data. But today, the real limiting factor has shifted. Both large model training and scaled inference deployment are highly dependent on sustained, stable GPU computing power, and the growth rate of compute demand is significantly outpacing supply expansion.

Currently, high-end GPU resources are concentrated in the hands of a few centralized cloud service providers, especially in the enterprise market. This centralized landscape is creating a series of ripple effects:

  • Pricing power has long been held by the platforms.
  • Compute resources are prioritized for large, established clients.
  • There are significant regional disparities in the availability of high-performance GPUs.

In such an environment, small and medium-sized AI teams, independent developers, and early-stage projects often bear higher costs or face restricted access to computing power. Computing power itself is evolving into an implicit competitive barrier. The ability to obtain GPU resources stably and at low cost increasingly directly determines the feasibility of an AI product's launch and scaled expansion.

It is precisely against this backdrop of structural imbalance that AI compute marketplaces have begun to emerge. They attempt to alleviate the singular dependence on centralized platforms by providing compute access paths different from traditional cloud services, opening up more possibilities for compute supply.

What is an AI Compute Marketplace

An AI compute marketplace is a platform that connects GPU computing power supply with AI workloads through a market-based coordination mechanism, rather than relying on a single centralized service provider for compute allocation.

From an overall structural perspective, such platforms primarily aggregate two types of participants:

  • Compute Providers: Including data centers, enterprises, miners, or individuals with idle computing power.
  • Compute Demanders: Including AI startup teams, research institutions, inference service providers, and model developers.

In a compute marketplace, the platform layer is responsible for the discovery, pricing, scheduling, and settlement of compute resources. Unlike traditional cloud services, the infrastructure is no longer monopolized by a single entity; hardware ownership, task execution, and pricing power are split and redistributed.

In this process, tokens may serve multiple functional roles, such as:

  • Settlement for compute usage
  • A tool for access control
  • Coordinating incentive mechanisms between supply and demand

However, note that the role tokens play varies from platform to platform with the specific architectural design; there is no uniform standard.
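To make this structure concrete, the Python sketch below models the objects such a marketplace coordinates: offers, jobs, token roles, and a cheapest-offer discovery rule. None of the projects discussed publishes this exact schema; every type, field name, and the pricing rule here is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class TokenRole(Enum):
    """Functional roles a platform token may play (varies by design)."""
    SETTLEMENT = "settlement"   # paying for compute usage
    ACCESS = "access"           # gating who may submit or serve jobs
    INCENTIVE = "incentive"     # rewarding reliable supply

@dataclass
class GpuOffer:
    provider_id: str
    gpu_model: str        # e.g. "A100-80GB"
    hourly_price: float   # quoted in the platform token
    available: bool

@dataclass
class ComputeJob:
    demander_id: str
    gpu_model: str
    max_hourly_price: float
    hours: float

def discover(job: ComputeJob, offers: list[GpuOffer]) -> GpuOffer | None:
    """Discovery and pricing in miniature: cheapest compatible offer wins."""
    candidates = [
        o for o in offers
        if o.available
        and o.gpu_model == job.gpu_model
        and o.hourly_price <= job.max_hourly_price
    ]
    return min(candidates, key=lambda o: o.hourly_price, default=None)
```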

From the perspective of trading platforms and market structure, AI compute marketplaces belong to an independent infrastructure category. They are neither AI applications nor consumer-facing products. Their core value lies in whether they can sustainably and stably coordinate the compute supply-demand relationship at scale.

How Decentralized GPU Marketplaces Operate

Although specific implementation paths vary, most decentralized compute marketplaces revolve around a relatively consistent structural hierarchy. Understanding where a project focuses within this structure is essential when evaluating AI compute tokens.

Supply Layer

At the supply layer, the platform is responsible for connecting GPU resources from distributed compute providers. Taking Akash Network as an example, it aggregates idle computing power from globally independent operators, transforming originally dispersed, underutilized hardware resources into an open compute pool that developers can directly call upon.
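As a rough sketch of what a supply layer must track, the toy registry below lets providers announce idle GPUs and counts only offers that keep heartbeating, since dispersed nodes join and leave unpredictably. The class, its TTL, and its methods are invented for illustration and are not Akash's actual implementation.

```python
import time

class SupplyPool:
    """Toy supply-layer registry: providers announce idle GPUs and must
    heartbeat periodically, because dispersed nodes join and leave."""

    TTL = 60.0  # seconds without a heartbeat before an offer is dropped

    def __init__(self) -> None:
        self._offers: dict[str, dict] = {}

    def announce(self, provider_id: str, gpu_model: str, count: int) -> None:
        # A real network would verify the hardware (benchmarks, attestation);
        # here the announcement is taken at face value.
        self._offers[provider_id] = {
            "gpu_model": gpu_model, "count": count, "seen": time.time(),
        }

    def heartbeat(self, provider_id: str) -> None:
        if provider_id in self._offers:
            self._offers[provider_id]["seen"] = time.time()

    def live_capacity(self, gpu_model: str) -> int:
        """Capacity developers can actually call on right now."""
        now = time.time()
        return sum(
            o["count"] for o in self._offers.values()
            if o["gpu_model"] == gpu_model and now - o["seen"] < self.TTL
        )
```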

Marketplace Layer

The core function of the marketplace layer is to match specific computing tasks with available GPUs. Render Network demonstrates a typical form of this mechanism, allocating GPU tasks based on node availability and performance metrics through network coordination, replacing the traditional centralized scheduling model.
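A minimal sketch of that matching step might look like the following: a task is routed to the highest-scoring free node, where the score blends availability and performance. The weights and field names are assumptions, not Render Network's actual formula.

```python
def node_score(node: dict) -> float:
    """Blend availability and measured performance into one ranking.
    The 0.6/0.4 weights are invented for illustration."""
    return 0.6 * node["uptime_ratio"] + 0.4 * node["benchmark_score"]

def match(task_gpu_model: str, nodes: list[dict]) -> dict | None:
    """Route a task to the best free node of the required GPU model."""
    eligible = [
        n for n in nodes
        if n["gpu_model"] == task_gpu_model and n["free"]
    ]
    return max(eligible, key=node_score, default=None)

nodes = [
    {"gpu_model": "A100", "free": True, "uptime_ratio": 0.99, "benchmark_score": 0.8},
    {"gpu_model": "A100", "free": True, "uptime_ratio": 0.90, "benchmark_score": 0.9},
]
best = match("A100", nodes)  # picks the first node: score 0.914 vs 0.90
```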

Execution Layer

At the execution layer, computing tasks run in isolated environments. io.net emphasizes coordinating AI workloads across heterogeneous GPU infrastructure through containerized execution and a unified scheduling system, while ensuring isolation and stability between different tasks.
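The snippet below sketches containerized isolation at its simplest, launching each job in a throwaway container with GPUs exposed and the network cut off. It assumes Docker with the NVIDIA container toolkit on the host; a production scheduler such as io.net's is far more involved.

```python
import subprocess

def run_isolated(image: str, command: list[str], gpus: str = "all") -> int:
    """Run one workload in its own container so jobs from different tenants
    cannot see each other's filesystem or processes. Assumes Docker with
    the NVIDIA container toolkit installed on the host."""
    argv = [
        "docker", "run",
        "--rm",               # discard the container after the job ends
        "--gpus", gpus,       # expose host GPUs inside the sandbox
        "--network", "none",  # deny network egress to the workload
        image, *command,
    ]
    return subprocess.run(argv, check=False).returncode

# Example: run an inference script baked into a (hypothetical) job image.
# exit_code = run_isolated("my-job-image:latest", ["python", "infer.py"])
```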

Settlement Layer

The settlement layer is used to measure compute usage and coordinate payments. Golem provides an example of usage-based settlement, where the platform pays compute providers based on task completion, rather than pre-declared compute capacity, aligning incentives more closely with actual delivered results.
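A usage-based settlement rule fits in a few lines. The sketch below pays providers only for completed GPU-hours; the data shape and the single flat price are simplifying assumptions, not Golem's actual accounting.

```python
def settle(jobs: list[dict], price_per_gpu_hour: float) -> dict[str, float]:
    """Usage-based settlement: pay providers for delivered GPU-hours only,
    not for capacity they merely advertised."""
    payouts: dict[str, float] = {}
    for job in jobs:
        if not job["completed"]:
            continue  # failed or abandoned jobs earn nothing
        amount = job["gpu_hours"] * price_per_gpu_hour
        payouts[job["provider"]] = payouts.get(job["provider"], 0.0) + amount
    return payouts

jobs = [
    {"provider": "node-a", "gpu_hours": 12.0, "completed": True},
    {"provider": "node-b", "gpu_hours": 8.0,  "completed": False},
]
print(settle(jobs, price_per_gpu_hour=1.5))  # {'node-a': 18.0}
```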

io.net (IO): A GPU Compute Marketplace Centered on Aggregation

io.net (IO) represents an AI compute marketplace path that prioritizes compute aggregation. Its core idea is to massively integrate dispersed GPU resources and present these compute capabilities to demanders in a unified manner, akin to cloud services.

This design places a premium on user experience. Developers can invoke compute resources directly, without interfacing with individual hardware providers one by one, which significantly lowers the barrier to entry and accelerates integration and deployment.
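The developer-facing appeal is that the entire supply side collapses into one call. The client below is entirely hypothetical, not io.net's SDK; it only illustrates the cloud-like invocation pattern the aggregation model aims for.

```python
# Hypothetical client; io.net's real SDK and endpoints will differ.
class AggregatorClient:
    """What cloud-like aggregation feels like from the developer's side:
    one call, no negotiation with individual hardware owners."""

    def __init__(self, api_key: str) -> None:
        self.api_key = api_key

    def request_cluster(self, gpu_model: str, count: int, hours: float) -> str:
        # Behind this single call the aggregator handles discovery, pricing,
        # scheduling, and node selection across many independent providers.
        # Returned here is a placeholder handle for the provisioned cluster.
        return f"cluster-{gpu_model}-{count}x"

client = AggregatorClient(api_key="...")
cluster_id = client.request_cluster("A100-80GB", count=8, hours=4.0)
```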

The main advantages of this model include:

  • Faster compute scheduling and delivery
  • User experience close to traditional cloud services, with a lower learning curve
  • Potential access to a scaled, centralized compute supply pool

At the same time, the aggregation model introduces new dependencies. The quality of compute providers, the stability of node operations, and providers' long-term willingness to participate directly affect overall service performance. Moreover, the aggregated compute model remains economically viable only if real demand is sustained.

For IO, the core question is: Can compute aggregation maintain predictable performance and stable compute utilization while continuing to scale?

iExec (RLC): A Compute Marketplace Centered on Security and Verifiable Execution

iExec (RLC) represents a compute marketplace model oriented towards secure execution. Its focus is not on massively aggregating GPU resources, but on providing a trusted off-chain computing execution environment for AI and data-intensive workloads.

Unlike cloud-like compute aggregation solutions, iExec emphasizes Trusted Execution Environments (TEE) and verifiable computing mechanisms. Developers can complete computing tasks off-chain while achieving access control, settlement, and execution result verification through on-chain mechanisms. Therefore, iExec is often seen as infrastructure suitable for workloads with higher requirements for data integrity, privacy protection, and execution trustworthiness, not merely pursuing ultimate compute performance.
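Stripped of TEE attestation and smart contracts, the commit-then-verify shape that iExec relies on can be sketched with a plain hash commitment. This is a deliberately simplified stand-in, not iExec's protocol.

```python
import hashlib

# Commit-then-verify in its simplest form. Real iExec pairs TEE attestation
# with on-chain contracts; a bare hash commitment only shows the shape of
# "execute off-chain, verify before settling".

def commit(result: bytes) -> str:
    """Worker side (off-chain): publish a digest of the computed result."""
    return hashlib.sha256(result).hexdigest()

def verify(result: bytes, committed_digest: str) -> bool:
    """Coordinator side: check the delivered result against the commitment
    before releasing payment."""
    return hashlib.sha256(result).hexdigest() == committed_digest

digest = commit(b"model-output")
assert verify(b"model-output", digest)
assert not verify(b"tampered-output", digest)
```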

The main advantages of this model include:

  • Supports verifiable, secure off-chain computation.
  • Access to compute resources through market mechanisms.
  • Clear distinction between coordination and execution layers, with well-defined architectural boundaries.

At the same time, this model involves trade-offs: the scalability of computational performance and the availability of GPUs depend on the scale and quality of participating compute providers, and iExec is not designed as a general-purpose GPU cloud service.