Moral Intelligence: Anthropic vs the DoD
The Ugly, Public Fight Between an AI Giant and the World's Largest Bureaucracy, and What Could Come Next
On Friday night, Secretary of Defense Pete Hegseth — backed by a scathing social media broadside from President Donald Trump — didn’t just criticize one of the biggest AI companies in the world. He began the process of making it a national pariah.
Hegseth and Trump allege that Anthropic’s ideological commitments justify the takedown. What they’re actually working toward, though, is unprecedented government action against a private American company — action that could threaten its very existence.
The Relationship That Was
Anthropic’s models, namely Claude, have been integrated into Department of Defense operations for some time. The company is one of four major AI firms — alongside OpenAI, Google, and xAI — building out national security applications. But until this week, Anthropic was the only AI company with access to the Pentagon’s classified systems. It was also revealed that Claude played an essential role in the raid in Venezuela that captured President Nicolás Maduro.
As Claude became more deeply embedded in military capabilities, the Defense Secretary pressed hard for Anthropic to lift all safety guardrails — to ensure, in the administration’s framing, maximum lethality that would unlock AI’s full benefit to the United States and save the lives of countless service members.
Anthropic, and by extension its CEO Dario Amodei, rebuffed them. Again and again.
Claude operates under two hard limits, purposefully implemented by the company:
First, their technology is not to be used for mass surveillance of American citizens — a violation of the right to privacy and general human dignity.
Second, AI should not run autonomous weapons systems without human oversight. This one is self-explanatory — and, not coincidentally, it was reinforced by a study released this week finding that even the best frontier AI models resorted to nuclear signalling in 95% of simulated wargaming crises.
Hegseth was unmoved. The negotiations turned ugly, then public.
The Nuclear Option
When Anthropic refused to budge, the DoD didn’t just terminate their Pentagon contract. They began the process of labeling Anthropic a Supply Chain Risk — a designation ordinarily reserved for companies tied to foreign adversaries whose products could compromise national security.
This is an extraordinary escalation. Under this label, any company doing business with the DoD would be expected to purge Anthropic’s technology from its systems. Claude is deeply integrated into Microsoft Azure and Amazon Web Services cloud infrastructure, both of which are major defense contractors. They would have to strip Claude from their servers to continue doing business in American national security.
In sticking to their guns, Anthropic’s leaders may have sealed their fate.
The Legal Battlefield
What happens next will be consequential — not just for Anthropic, but for every private entity in the United States that faces the wrath of an uncompromising president.
In the immediate term, Anthropic is likely to characterize this as a secondary boycott targeting its civilian business — and as unlawful government retaliation against a domestic company, a claim it will almost certainly take to court.
Expect legal action against the Defense Department, most likely in the form of an Administrative Procedure Act (APA) lawsuit. Their opening move will be to secure an emergency preliminary injunction blocking the DoD from enforcing the supply chain risk designation. They’ll argue the government abused its discretion and acted arbitrarily.
The case for irreparable harm is easy to make — just look at the Azure and AWS exposure alone. The case on the merits is strong, too. Federal judges, especially conservative ones, dislike it when executive agencies stretch statutes beyond their written limits. It would be a tough sell for the Defense Department to argue that a law designed to secure military supply chains legally authorizes the government to execute a secondary boycott against a domestic company’s entirely separate civilian business.
A federal judge will almost certainly grant the injunction — temporarily freezing Hegseth’s ban on contractors’ commercial activity while the larger suit plays out.
The Government’s Defense
The Department of Justice will defend Hegseth’s order by leaning on extreme judicial deference to the Executive Branch in matters of national security. The government will likely argue that if a defense contractor uses Anthropic for civilian purposes, it creates a porous network environment where an “ideologically compromised” AI could indirectly threaten DoD data.
Here’s the problem with that argument: the DoD explicitly stated the ban is rooted in Anthropic’s “ideological whims” and ethical terms of service — not an actual cyber-espionage threat. Courts will almost certainly view this as a punitive business measure, not a genuine national security emergency.
The Wild Card
There is a wilder scenario lurking beneath the surface. Before invoking the supply chain risk designation, the administration threatened to use the Cold War-era Defense Production Act to legally force Anthropic to strip away its safety guardrails against autonomous weapons and surveillance.
If the administration pivots and actually attempts this, the legal battle goes nuclear.
It would trigger a cascade of constitutional questions. Can the government use the DPA to compel a private company to write — or delete — code against its own ethical guidelines? Anthropic would mount a massive First Amendment (compelled speech) and Fifth Amendment (uncompensated taking) defense. Most legal experts doubt the DPA can be stretched this far.
The Most Likely Outcome
The most plausible resolution is a split decision — one that ultimately preserves Anthropic’s relevance but doesn’t leave it unscathed.
The courts will almost certainly rule that Hegseth overstepped his statutory authority. The DoD cannot legally bar civilian companies from doing separate, private business with Anthropic. The blanket commercial ban will be struck down.
However, the courts will uphold Trump and Hegseth’s right to cancel the $200 million DoD contract and ban all federal agencies from purchasing Anthropic’s products. The government has near-total discretion over how it spends its own money and what software it installs on its own servers.
The Aftermath
Even with a legal victory, the chilling effect will do real damage. Major defense-adjacent companies may quietly distance themselves from Anthropic to avoid angering the Pentagon or jeopardizing their own government contracts. Meanwhile, OpenAI — which immediately endorsed the Pentagon’s plans and stepped in to claim the contracts — will cement itself as the dominant player in U.S. defense-tech AI.
But here’s the twist.
While losing a $200 million Department of Defense contract stings, being publicly exiled by the U.S. military specifically because you refused to build autonomous killing machines and mass surveillance tools is arguably the greatest marketing campaign ever handed to a company operating in the global market. That contract represents only 1.4% of Anthropic’s projected 2026 revenue. The fallout could catalyze an entirely new global landscape for AI: models aligned with defense and lethality on one side, and models aligned with private, civilian-focused entities on the other. Anthropic is positioned to lead the latter.
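A quick back-of-the-envelope check puts that figure in context. Taking the article's two numbers at face value — a $200 million contract equal to 1.4% of projected 2026 revenue — the implied revenue projection is roughly $14 billion:

```python
# Back-of-the-envelope check: if a $200M contract is 1.4% of projected
# 2026 revenue, what total revenue does that imply?
contract_value = 200_000_000   # DoD contract, USD (figure from the article)
share_of_revenue = 0.014       # 1.4% (figure from the article)

implied_revenue = contract_value / share_of_revenue
print(f"Implied 2026 revenue: ${implied_revenue / 1e9:.1f}B")
# prints: Implied 2026 revenue: $14.3B
```

In other words, the lost contract is small relative to the business the company would be defending in court.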
Anthropic’s refusal to drop safety guardrails would be warmly received by the regulation-heavy European Union. While it may face competition from homegrown champions riding the push for European tech sovereignty, Anthropic could be fast-tracked for integration into government, healthcare, and financial systems across the continent — territory where companies like OpenAI struggle due to persistent privacy concerns.
There are allies in Canada, where ethics and safety in AI are paramount. A company untethered from the U.S. Military Industrial Complex will almost certainly win over Canadians. And there’s Africa — though the continent may lean toward open-source models like Meta’s Llama, there’s reason to believe Anthropic would want to be part of the new world being built there in the 21st century.
The Stakes Beyond Anthropic
If the federal courts allow Hegseth’s commercial ban to stand, it establishes a significant constitutional precedent: the Executive Branch can leverage national security procurement authority to effectively penalize a domestic company’s civilian business for refusing to comply with demands that fall outside the scope of its government contract. That is a question of separation of powers and statutory limits on executive authority that extends far beyond one AI company.
If the courts strike it down, the precedent cuts the other direction: there are enforceable limits on how far the government can reach into private commerce when a company exercises its legal right to set the terms of its own product.
Either way, the outcome will define the boundary between national security discretion and the constitutional protections afforded to private enterprise — and every company operating in the defense-adjacent space will be watching.
Further Reading:
Wargaming Study (King’s College London): AI Used Nuclear Signalling in 95% of Simulated Crises, King’s Study Finds
Anthropic & Microsoft Azure Integration: Introducing Anthropic’s Claude models in Microsoft Foundry
Anthropic & AWS Integration: Powering the next generation of AI development with AWS - Anthropic
CBS News Report: Pete Hegseth designates Anthropic as supply-chain risk amid feud