This isn't a vague industry disagreement. Two of the most powerful people in AI are now directly and personally calling each other out. In writing.
Dario Amodei sent a 1,600-word memo to
Anthropic staff last week. He called OpenAI's Pentagon deal "safety theater." He called Sam Altman's public statements about it "straight up lies." He said OpenAI signed the contract to keep its own employees happy — not because it actually cared about safety.
That's a named attack. From a sitting CEO. In a written memo. It doesn't get more serious than this.
Key Takeaways:
- Anthropic CEO Dario Amodei sent a 1,600-word staff memo calling OpenAI's Pentagon deal "safety theater" and Altman's public statements "straight up lies"
- Anthropic's own DoD talks collapsed after the Pentagon demanded unrestricted "lawful use" access — OpenAI signed a nearly identical deal hours later
- Amodei's core legal argument is substantive: current US law permits data-based domestic surveillance without warrants, making "lawful use" contract language far weaker than it appears
- Altman admitted the deal looked "opportunistic and sloppy" and later revised contract language — but critics argue the revision came too late to matter
- Anthropic's moral high ground is complicated by prior Pentagon ties: the DoD reportedly used Claude to assist in selecting military targets in Iran under an earlier Anthropic contract
How the Pentagon Deal Blew Everything Up
Anthropic had its own Department of Defense deal on the table. Worth $200 million. The talks collapsed when the Pentagon demanded unrestricted access to Anthropic's AI for "any lawful purpose."
Anthropic said no. Two hard limits: no mass domestic surveillance, no autonomous weapons without human oversight. The DoD wouldn't budge. The deal died.
Hours later, OpenAI stepped in. Same contract. Signed it.
Altman said publicly that OpenAI's version included the same protections Anthropic had demanded. Amodei's response was blunt: that's not true. He said the difference between the two companies came down to one thing. Anthropic actually meant it.
OpenAI didn't.
The Legal Loophole Nobody Talked About
Here's where it gets genuinely interesting.
The contract language bans "unlawful" surveillance. Sounds safe, right? But here's the catch — mass surveillance of US citizens is already legal under current law. Government agencies can buy data from private sources without a warrant. No law broken. No contract violated.
That's Amodei's real argument. "Lawful use" is a weak protection when the law itself has wide gaps. Altman later revised the contract to explicitly ban domestic surveillance of US persons. But critics argued that move came too late and too quietly.
Altman admitted the whole thing looked "opportunistic and sloppy." That's a rare moment of public self-criticism. It didn't fully land.
Nobody's Hands Are Spotless Here
Amodei positioned this as a principled stand. But the full picture is messier.
Anthropic's own prior Pentagon deal reportedly allowed the DoD to use Claude to help select military targets in Iran. That's a hard fact to ignore when you're painting yourself as the safety-first AI company.
Amodei also told staff that Anthropic was sidelined by the Trump administration because it hadn't offered enough public praise of the President. He contrasted that directly with how Altman operates. Make of that what you will.
The user reaction to OpenAI's Pentagon deal was swift and measurable. ChatGPT uninstalls surged dramatically after the announcement. Anthropic climbed to number two in the App Store almost immediately. Amodei noted both facts in the memo. He wasn't subtle about the timing.
This fight is about contract language on the surface. Underneath? It's about who gets to own the "responsible AI" label — and what that label is actually worth when government money is on the table.