They will also lose the California case.
Court Denies Anthropic Request to End Defense Department Punishment
The company is involved in two separate legal actions related to being blacklisted by the Pentagon
By Amrith Ramkumar and Heather Somerville
April 8, 2026 7:37 pm ET
A federal appeals court denied Anthropic’s request for relief from the Defense Department’s supply-chain risk designation.
WASHINGTON—A federal appeals court on Wednesday denied Anthropic’s request for relief from the Defense Department declaring the company a supply-chain risk, complicating the legal battle between the U.S. government and one of the country’s leading artificial-intelligence companies.
While Anthropic has sustained financial harm from the Pentagon’s actions, the appeals court said that harm wasn’t compelling enough to justify overriding the government on a matter of national security. A separate court in California recently granted Anthropic a preliminary injunction to stop President Trump’s broader ban on all federal agencies using the company’s Claude models.
Wednesday’s decision means that the Defense Department’s designation of Anthropic as a security threat stands, so the company will continue to be excluded from new contracts and Pentagon systems. In late February, Trump gave the Defense Department six months to transition away from Claude, which is being used in the war in Iran.
“On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict,” the appeals court said in its decision.
The case will now proceed alongside the government’s appeal of the California court ruling, creating legal uncertainty for companies that work with both Anthropic and the government. Some companies have said that they would stop using Claude in their government work to avoid any potential issues.
“The D.C. Circuit’s denial will prolong ambiguities regarding whether political considerations can drive federal procurement,” said Matt Schruers, chief executive of the Computer & Communications Industry Association.
The Pentagon applied the supply-chain risk designation against Anthropic using two different statutes, one of which is under the jurisdiction of the Washington appeals court.
Defense Secretary Pete Hegseth designated the company a supply-chain risk after attempts to renegotiate its contract with Anthropic fell apart. Anthropic had sought assurances that its models wouldn’t be used in fully autonomous weapons or for domestic surveillance. The Pentagon countered that such prohibitions were unnecessary because military policies or laws already restricted such uses, and pushed for an agreement in which the military could use the AI in all legal applications.
“Today’s D.C. Circuit stay allowing the government to designate Anthropic as a supply chain risk is a resounding victory for military readiness,” said Todd Blanche, the acting attorney general.
The supply-chain risk designation is normally used on companies from U.S. adversaries such as China that pose security threats, making the Defense Department’s move against Anthropic nearly unprecedented.
Some of Anthropic’s competitors including OpenAI and Elon Musk’s xAI have since agreed to the Pentagon’s terms.
The company has said it could lose revenue and investors because of the government’s actions, arguments that the appeals court acknowledged in its decision.
“We’re grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply-chain designations were unlawful,” an Anthropic spokeswoman said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
The risks posed by AI were illustrated again on Tuesday, when Anthropic said it is making a preview version of its new AI model available to about 50 companies and organizations that maintain critical infrastructure, including U.S. tech giants, before releasing it publicly. The company said it worried that releasing it now could cause widespread disruptions online.
Link: Anthropic
Not really a good pick for defense work.
Link: https://www.cpomagazine.com/cyber-security/taking-stock-of-the-anthropic-source-code-leak-ai-agent-compromise-signals-security-issues-claude-copies-ahead-of-massive-ipo/
...Burry noted Anthropic's explosive growth from $9B to $30B in annual recurring revenue (ARR) in just months. That's proof that businesses are pivoting toward "easier, cheaper, (and more) intuitive solutions", he said.
Guess Anthropic's refusal to let Trump's DOD use its products for domestic spying hasn't hurt its appeal in the business community.
btw, the news isn't going to make JD Vance's 'Handler', Peter Thiel, very happy...given his co-founding ownership of Palantir.
Link: https://www.businessinsider.com/michael-burry-anthropic-palantir-short-ai-stocks-tech-pltr-claude-2026-4
Link: https://youtu.be/-FbDvcSeqKg?si=6OzxH3Ogi5lJTVXM
But your greater point is correct because it is the wild wild West and everyone is racing to win the next frontier. State control of a god-like model should scare the shit out of everyone.
Anthropic is without a doubt the AI leader at the moment, and even they don't understand how their models work much better than the general public does.
Link: https://www.businessinsider.com/anthropic-mythos-latest-ai-model-too-powerful-to-be-released-2026-4