Gonka v0.2.9 Mainnet Upgrade Completed, PoC v2 Officially Activated
Odaily News: Gonka, a decentralized AI computing power network, has completed its v0.2.9 mainnet upgrade. The upgrade was approved through on-chain governance voting and executed at block height 2,451,000. The network has now fully switched to PoC v2 as its weight-distribution mechanism, and the original PoC logic is being phased out. The upgrade marks a new stage of maturity for Gonka's compute-verification mechanism and network governance.
With the upgrade in effect, Confirmation PoC becomes the authoritative source of network results, strengthening the verifiability and determinism of computing power contributions. The network has also entered a single-model operation phase: by standardizing on one model and one verification standard, it reduces the noise introduced by heterogeneous hardware and provides a more stable foundation for decentralized AI inference and training. Currently, only ML Nodes running Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 on PoC v2-compatible images can participate in weight calculations. The transition from Epoch 158 to 159 will be the first complete operational cycle under PoC v2.
According to real-time data from GonkaScan, as of February 2, 2026, Gonka's total network computing power is approaching the equivalent of 14,000 H100 GPUs, a scale comparable to a national-level AI computing cluster. Compared with the roughly 6,000 H100 equivalents when Bitfury announced its $50 million investment in early December 2025, this implies a compound monthly growth rate of about 52% over the two-month window, a pace that puts Gonka among the fastest-growing decentralized computing networks.
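The ~52% figure is a compound monthly growth rate derived from the two data points above. A minimal sketch of that arithmetic, using the figures reported in the article (the exact day counts are approximated as two whole months):

```python
# Compound monthly growth implied by the article's figures:
# ~6,000 H100 equivalents in early December 2025,
# ~14,000 H100 equivalents by early February 2026 (about two months later).
start = 6_000    # H100-equivalent compute at Bitfury's investment announcement
end = 14_000     # H100-equivalent compute at time of writing
months = 2       # early December 2025 -> early February 2026

# Solve start * (1 + r)**months = end for the monthly rate r.
monthly_growth = (end / start) ** (1 / months) - 1
print(f"Compound monthly growth: {monthly_growth:.1%}")  # ~52.8%, i.e. "about 52%"
```

Note that a simple (non-compounded) average would overstate the rate; compounding is the standard way to express month-over-month growth from two endpoint measurements.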
In terms of hardware mix, high-end GPUs such as the NVIDIA H100, H200, and A100 account for over 80% of the network's total computing power, underscoring Gonka's strength in aggregating and scheduling high-performance compute resources. Network nodes currently span roughly 20 countries and regions across Europe, Asia, the Middle East, and North America, laying the groundwork for a global AI computing infrastructure resilient to single points of failure.
