Gonka Explains PoC Mechanism Adjustments and Model Evolution Direction: Aligning Weights with Real Compute Costs to Keep GPUs of All Tiers Participating
Odaily News — Gonka, a decentralized AI compute network, recently walked through the phased adjustments to its PoC mechanism and model operation during a community AMA. The key changes: using the same large model for both PoC and inference, moving PoC activation from delayed switching to near-real-time triggering, and revising the compute-weight calculation to better reflect the actual computational costs of different models and hardware.
Co-founder David said the adjustments are not aimed at short-term output or at any individual participant. Rather, as the network's compute capacity expands rapidly, they are a necessary evolution of the consensus and verification architecture, intended to keep the network stable and secure under high load and to lay the groundwork for larger-scale AI workloads.
Addressing community discussion about small models currently yielding higher token output, the team noted that models of different scales consume significantly different amounts of real compute to produce the same number of tokens. As the network moves toward higher compute density and more complex tasks, Gonka is gradually aligning compute weights with actual computational cost, to prevent a long-term imbalance in the network's compute mix that could limit overall scalability.
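For illustration only, a minimal sketch of what cost-aligned weighting could look like, assuming a rough per-token compute estimate for each model scale. Gonka has not published its exact formula; the model names, numbers, and function below are assumptions, not the network's actual implementation:

# Hypothetical sketch: weight rewards by estimated compute cost, not raw token count.
# Model names, per-token FLOP estimates, and efficiency factors are illustrative assumptions.

# Rough per-token generation cost (in GFLOPs), roughly ~2x parameter count in billions.
PER_TOKEN_GFLOPS = {
    "small-7b": 14,
    "medium-70b": 140,
    "large-405b": 810,
}

def compute_weight(model: str, tokens: int, hardware_efficiency: float = 1.0) -> float:
    """Weight a contribution by its estimated real compute, not by token count alone."""
    return tokens * PER_TOKEN_GFLOPS[model] * hardware_efficiency

# The same token output yields very different weights once cost alignment is applied.
print(compute_weight("small-7b", tokens=1_000_000))   # 14,000,000 cost-weighted units
print(compute_weight("large-405b", tokens=1_000_000)) # 810,000,000 cost-weighted units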
Under the latest PoC mechanism, PoC activation has been compressed to within 5 seconds, cutting the compute wasted on model switching and waiting so that a larger share of GPU time goes to useful AI computation. Running a single unified model also reduces the overhead of nodes switching between consensus and inference, improving overall compute utilization.
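As a rough, illustrative calculation only (the epoch length and the "before" switching overhead below are assumptions, not figures disclosed by Gonka), the 5-second activation mainly shows up as higher effective GPU utilization:

# Illustrative back-of-envelope: effective utilization vs. PoC activation/switch time.
# Epoch length and the "before" switching time are assumed for the sake of the example.

def effective_utilization(epoch_seconds: float, switch_seconds: float) -> float:
    """Fraction of an epoch spent on useful AI computation rather than switching/waiting."""
    return (epoch_seconds - switch_seconds) / epoch_seconds

EPOCH = 60 * 60  # assume a 1-hour epoch for illustration

before = effective_utilization(EPOCH, switch_seconds=5 * 60)  # assume ~5 minutes of model switching previously
after = effective_utilization(EPOCH, switch_seconds=5)        # near-real-time activation within 5 seconds

print(f"before: {before:.1%}, after: {after:.1%}")  # before: 91.7%, after: 99.9%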
The team also emphasized that single-card and small-to-medium GPU operators can continue to earn rewards and take part in governance through options such as mining-pool collaboration, flexible per-Epoch participation, and inference tasks. Gonka's long-term goal is for compute at different tiers to coexist within the same network as the mechanism evolves.
Gonka said that all key rule changes go through on-chain governance and community voting. Going forward, the network will gradually support more model types and AI task formats, offering continuous, transparent participation for GPUs of all scales worldwide and promoting the long-term, healthy development of decentralized AI compute infrastructure.
