Anthropic AI an ‘unacceptable risk’ to military, US govt says

Published March 18, 2026
US Department of War and Anthropic logos are seen in this illustration taken March 1, 2026. — Reuters/File

Artificial intelligence company Anthropic posed an “unacceptable risk” to military supply chains, the US government said on Tuesday as it defended against the tech firm’s legal challenge to its designation as dangerous.

Anthropic’s Claude AI model has been in the spotlight in recent weeks, both for its alleged use in identifying targets for US bombing in Iran and for the company’s refusal to allow its systems to be used to power mass surveillance in the United States or fully autonomous lethal weapons systems.

Justifying its decision to cut ties with Anthropic in response to a legal complaint from the firm, the Pentagon — dubbed the Department of War (DoW) by the Trump administration — said it “became concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” in a court document seen by AFP.

“AI systems are acutely vulnerable to manipulation,” the government added in the filing to a California federal court.

“Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate ‘red lines’ are being crossed,” it said.

Anthropic’s refusal to agree that its AI tech could be deployed by the military for “any lawful use” therefore posed an “unacceptable risk to national security,” the document read.

“Anthropic’s behavior more generally caused the Department to question whether Anthropic represented a trusted partner,” the government said.

Classification as a “supply chain risk,” which Anthropic has challenged in a case against the Pentagon and other arms of the federal government, in theory means that all government suppliers would be barred from doing business with the company.

The designation is typically reserved for organisations from foreign adversary countries, such as Chinese tech giant Huawei.

Other major American tech firms such as Microsoft, which itself both uses Anthropic’s Claude model and supplies the US military, have weighed in on the AI company’s side.

“This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” Microsoft said in an amicus brief filed with the court last week.
