What Brought GPT and Claude Together? Their Shared Opposition to the Pentagon

Shenchao TechFlow
Guest Columnist
2026-02-28 07:25
This time they joined forces, but it likely only lasted a few hours.
AI Summary
  • Core Viewpoint: This article examines the ethical and commercial dilemma AI companies face when working with the U.S. government. Anthropic was sanctioned by the Department of Defense for holding to its AI usage principles, while OpenAI secured the contract; the contrast shows how a company's political posture and negotiation strategy can produce vastly different outcomes on national security issues. At its core, the government is using its power over market access to establish dominance in rule-making.
  • Key Elements:
    1. Event Origin: Anthropic insisted on two red lines—"prohibition of mass surveillance of citizens" and "prohibition of fully autonomous weapons"—and advocated for closing legal loopholes in a $200 million contract with the U.S. military. This led to rejection by the Department of Defense and an ultimatum.
    2. Diverging Stances and Brief Alliance: After Anthropic refused to compromise, OpenAI employees voiced support in a joint statement, and its CEO declared holding the same red lines, showing initial industry solidarity on ethical issues.
    3. Escalation of Sanctions: Following Anthropic's refusal to sign, it was designated by the Department of Defense as a "supply chain security risk" (similar to Huawei's treatment) and publicly criticized by former President Trump, facing the risk of exclusion from the entire federal supply chain.
    4. Competitor Gains: OpenAI quickly reached an agreement with the Pentagon, filling the gap left by Anthropic. However, the accepted terms did not include the "closing legal loopholes" clause that Anthropic had insisted on.
    5. Political and Ideological Factors: The article notes that Anthropic was criticized by some officials as "woke AI," and its CEO's political leanings (not attending Trump's inauguration) may have influenced the negotiation outcome, indicating the controversy extends beyond purely commercial or technical realms.
    6. Industry Warning Signal: The core impact of this event is to "make an example of one to warn a hundred." By sanctioning Anthropic, the government aims to assert its authority clearly, forcing other AI companies to perform "rational" business calculations when facing similar choices in the future, potentially suppressing industry-wide ethical resistance.

Original Author: Kuli, Shenchao TechFlow

A photo went viral online a few days ago.

India held an AI summit, where Prime Minister Modi stood on stage flanked by a row of Silicon Valley titans. During the group photo, Modi raised the hand of the person next to him overhead, and others followed suit, linking hands in a show of unity.

But, two people did not join hands.


The CEOs of OpenAI and Anthropic—the leaders behind ChatGPT and Claude, respectively—stood next to each other, each raising a fist.

No hand-holding, no eye contact, like two rivals forced by a teacher to share a desk.

These two companies have been locked in fierce competition for years. The team behind Claude split from OpenAI. They've been battling for users, enterprise clients, and funding. During this year's Super Bowl, Anthropic even paid for ads mocking ChatGPT's plans to introduce advertising.

So, not holding hands? Understandable.

But today, they joined hands. Because of the Pentagon.

Here's what happened.

Anthropic, the company behind Claude, signed a contract with the U.S. Department of Defense last year worth up to $200 million. Claude became the first AI model deployed on the U.S. military's classified networks, assisting with tasks like intelligence analysis and mission planning.

But Anthropic drew two red lines in the contract:

Claude cannot be used for mass surveillance of U.S. citizens, nor for autonomous weapons without human involvement. (Reference: Anthropic's 72-Hour Identity Crisis)

The Pentagon, however, did not accept this.

Their demand was simple: unrestricted use. Once they buy a tool, they should be able to use it freely. What right does a tech company have to tell the U.S. military what it can and cannot do?

Last Tuesday, Defense Secretary Hegseth delivered an ultimatum to Anthropic's CEO in person: agree by 5:01 PM on Friday, or face the consequences.

Anthropic did not agree.

Their CEO issued a public statement, essentially saying: We deeply understand the importance of AI to U.S. national defense, but in a few cases, AI can undermine rather than defend democratic values. We cannot in good conscience accept this demand.


The Pentagon's lead negotiator, Deputy Defense Secretary Emil Michael, subsequently called him a liar on social media, accusing him of having a God complex and playing games with national security.

A Brief Handshake

Then, the unexpected happened.

Over 400 employees from OpenAI and Google signed a joint open letter titled "We Will Not Be Divided."

The letter stated that the Pentagon was negotiating with AI companies one by one, trying to get others to agree to the terms Anthropic refused, using fear to divide each company.

OpenAI's CEO also sent an internal memo to all employees, stating that OpenAI shares the same red lines as Anthropic:

No mass surveillance, no autonomous lethal weapons.


The two companies that refused to hold hands just days ago suddenly found themselves on the same side because of the Pentagon.

But this solidarity likely lasted only a few hours.

At 5:01 PM on Friday, the Pentagon's ultimatum expired. Anthropic did not sign.

A U.S. tech company valued at $380 billion risked voiding a $200 million contract and refused the U.S. Department of Defense. In the past, such an incident would have at most resulted in contract termination and finding a new supplier. But Washington's reaction this time was far from a mere commercial dispute.

Trump posted on Truth Social about an hour later, calling Anthropic "left-wing lunatics," accusing them of trying to place themselves above the Constitution and playing with the lives of American soldiers.

He demanded all federal agencies immediately cease using Anthropic's technology.

Shortly after, U.S. Defense Secretary Hegseth announced that Anthropic would be designated a "supply chain security risk." This label is typically reserved for companies like Huawei. The message was clear: any contractor doing business with the U.S. military could no longer use Anthropic's products.


Anthropic said it would take legal action.

That same evening, OpenAI, which had previously stood in solidarity, signed an agreement with the Pentagon.

An Ideological Issue

What did OpenAI get?

The position left vacant after Claude was kicked out: the AI supplier for the U.S. military's classified networks. However, OpenAI presented the Pentagon with three conditions: no mass surveillance, no autonomous weapons, and human oversight for high-risk decisions.

The Pentagon said, okay.

You read that right. Conditions the Pentagon refused to accept from Anthropic after weeks of negotiation were agreed upon with another company in just days?

Of course, the proposals weren't exactly the same.

Anthropic asked for an extra layer: they argued that current laws can't keep up with AI's capabilities. For example, AI can legally purchase and aggregate your location data, browsing history, and social media information, achieving an effect equivalent to surveillance, with each step being legal.

Anthropic said simply writing "no surveillance" is useless; this loophole needs to be closed. OpenAI did not insist on this point; they accepted the Pentagon's argument that existing laws are sufficient.

But if you think this is just a disagreement over terms, you're being naive. This negotiation was never just about the terms from the start.

The White House's AI czar, David Sacks, had already publicly criticized Anthropic for promoting "woke AI" (ideology-first, political correctness). Senior Pentagon officials told the media that the problem with Dario (Anthropic CEO Dario Amodei) was ideological: "We know who we're dealing with."

Elon Musk's xAI, a direct competitor to Anthropic, repeatedly attacked the company on X this week, saying it "hates Western civilization."

And Anthropic's CEO did not attend Trump's inauguration last year. OpenAI's CEO did.


Making an Example

So let's recap what happened.

Same principles, same red lines. But because Anthropic asked for an extra layer of protection, was on the wrong side politically, and struck the wrong posture, it was labeled a threat to U.S. national security on par with Huawei.

OpenAI asked for less, maintained better relations, and got the contract. Is this a victory for principles, or the pricing of principles?

This isn't the first time a Pentagon contract has faced resistance.

In 2018, over 4,000 Google employees signed a petition, with more than a dozen resigning, to protest the company's involvement in a Pentagon project called Project Maven. That project used AI to analyze drone footage, helping the military identify targets faster.

Google ultimately withdrew. They didn't renew the contract and walked away. The employees won.

Eight years later, the same controversy has resurfaced. But the rules have completely changed. An American company says it can do business with the military, but there are two things it won't do. The U.S. government's response is to kick it out of the entire federal system.

And the damage from the "supply chain security risk" label far exceeds losing a $200 million contract.

Anthropic's revenue this year is estimated at around $14 billion; the $200 million contract is a drop in the bucket. But this label means any company doing business with the U.S. military cannot use Claude.

These companies don't need to agree with the Pentagon's stance; they just need to perform a risk assessment: continue using Claude and potentially lose government contracts, or switch to another model with no issues.

The choice is easy. That's the real signal from this event.

Whether Anthropic can withstand this isn't the most important thing. What matters is whether the next company will dare to stand firm. It will look at this outcome, weigh the cost of sticking to principles, and make a very rational decision.

Looking back at that photo from India, everyone's hands are linked and raised overhead, except for those two, each holding a fist.

Perhaps that's the norm.

AI companies can share the same principles, but their hands won't necessarily join.
