
Anthropic and OpenAI Adjust Safety Rule Statements Amid Accelerating AI Competition

2026-02-26 01:45

Odaily News Anthropic has removed a core safety commitment from its responsible scaling policy, no longer pledging to pause training if risk mitigation measures are not fully in place. Anthropic's Chief Science Officer, Jared Kaplan, told TIME that, with competitors continuing to advance amid rapid AI development, a unilateral pledge to halt training would have no practical effect.

OpenAI has similarly revised its mission statement, removing the word "safely" from its 2024 IRS filing. The earlier wording committed to building general AI that "safely benefits humanity"; it now reads "ensuring that artificial general intelligence benefits all of humanity."

Edward Geist, a senior policy researcher at the RAND Corporation, stated that the advanced AI envisioned by early AI safety advocates is fundamentally different from current large language models. The change in terminology also reflects companies' desire to signal to investors and policymakers that they will not retreat from economic competition due to safety concerns.

Anthropic recently completed a $30 billion financing round, with a valuation of approximately $380 billion; OpenAI is advancing a financing round of up to $100 billion supported by Amazon, Microsoft, and Nvidia. Meanwhile, Anthropic has publicly clashed with U.S. Secretary of Defense Pete Hegseth over its refusal to grant the Pentagon full access to Claude, casting uncertainty over its defense contracts. (Decrypt)