Gradient Releases Echo-2 RL Framework, Boosting AI Research Efficiency
Odaily News: Distributed AI lab Gradient today released Echo-2, a distributed reinforcement learning framework aimed at breaking through the efficiency bottleneck in AI research training. By fully decoupling the Learner and Actor at the architecture level, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, delivering more than a 10x increase in research throughput on the same budget.
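Echo-2's internal APIs are not public, so the following is only a minimal sketch of what Learner/Actor decoupling typically looks like in an asynchronous RL loop: actors stream rollouts into a queue using possibly stale weight snapshots, while the learner consumes batches and updates weights on its own schedule. All names and data structures here are hypothetical.

```python
# Minimal sketch of Learner/Actor decoupling in async RL.
# All names are hypothetical; this is not Echo-2's actual API.
import queue
import threading
import time

rollout_queue = queue.Queue(maxsize=64)           # actors push, learner pulls
weights_lock = threading.Lock()
shared_weights = {"version": 0}                   # stand-in for model weights

def actor(actor_id: int, steps: int) -> None:
    """Generates rollouts under a (possibly stale) weight snapshot."""
    for _ in range(steps):
        with weights_lock:
            snapshot = dict(shared_weights)       # cheap stand-in for a weight sync
        trajectory = {"actor": actor_id,
                      "policy_version": snapshot["version"],
                      "reward": 1.0}              # placeholder rollout
        rollout_queue.put(trajectory)             # decoupled hand-off to the learner
        time.sleep(0.01)                          # simulate sampling cost

def learner(updates: int, batch_size: int) -> None:
    """Consumes rollouts and updates weights; never blocks on any one actor."""
    for _ in range(updates):
        batch = [rollout_queue.get() for _ in range(batch_size)]
        with weights_lock:
            shared_weights["version"] += 1        # gradient step placeholder
        versions = [t["policy_version"] for t in batch]
        print(f"update {shared_weights['version']}: policy versions {versions}")

actors = [threading.Thread(target=actor, args=(i, 40), daemon=True) for i in range(4)]
for t in actors:
    t.start()
learner(updates=10, batch_size=8)
```

Because the learner pulls from a shared queue rather than waiting on any specific actor, slow or failed actors only reduce throughput instead of stalling training, which is what makes cheap, unreliable GPU instances usable for sampling.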
The framework applies compute-storage separation to asynchronous reinforcement learning (async RL), offloading the compute-heavy sampling workload to unstable GPU instances and heterogeneous GPUs via Parallax. Combined with bounded staleness, fault-tolerant instance scheduling, and the in-house Lattica communication protocol, this significantly improves training efficiency while preserving model accuracy. Alongside the framework, Gradient will also launch Logits, an RLaaS platform intended to shift AI research from a paradigm of "capital accumulation" to one of "efficiency iteration." Logits is now open for pre-registration by students and researchers worldwide (logits.dev).
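Bounded staleness is a standard guardrail in async RL: trajectories sampled under a policy too many versions behind the current learner weights are discarded (or down-weighted) so that asynchrony does not degrade accuracy. The announcement does not specify how Echo-2 implements this; the snippet below is only an illustration, with a hypothetical MAX_STALENESS threshold.

```python
# Illustrative bounded-staleness filter (assumption: not Echo-2's actual
# implementation). Trajectories sampled more than MAX_STALENESS policy
# versions behind the learner are dropped, trading a little sample
# efficiency for accuracy guarantees under asynchronous updates.
MAX_STALENESS = 4

def filter_stale(batch: list[dict], learner_version: int) -> list[dict]:
    """Keep only trajectories whose sampling policy is recent enough."""
    return [t for t in batch
            if learner_version - t["policy_version"] <= MAX_STALENESS]

batch = [{"policy_version": v} for v in (10, 9, 5, 3)]
print(filter_stale(batch, learner_version=10))
# keeps versions 10 and 9; drops 5 and 3 (more than 4 versions stale)
```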
Gradient is an AI lab dedicated to building distributed infrastructure, with a focus on the distributed training, serving, and deployment of cutting-edge large models.
