Anthropic takes US govt to court over ‘security risk’ designation

Published March 10, 2026
Anthropic logo is seen in this illustration taken on May 20, 2024. — Reuters/File Photo

WASHINGTON: Anthropic filed suit on Monday against the Trump administration, alleging it retaliated against the AI company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.

In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.

In its lawsuit, Anthropic said the company was founded on the belief that AI should be “used in a way that maximises positive outcomes for humanity” and should “be the safest and the most responsible”.

“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle,” the lawsuit says.


Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organisations from foreign adversary countries, such as Chinese tech giant Huawei.

The designation requires defence vendors and contractors to certify that they do not use Anthropic’s models in their work with the Pentagon.

“The consequences of this case are enormous,” the lawsuit states, with the government “seeking to destroy the economic value created by one of the world’s fastest-growing private companies”.

The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.

President Donald Trump ordered every federal agency to cease all use of Anthropic’s technology. Hours later, Hegseth designated Anthropic a “supply-chain risk to national security” and ordered that no military contractor, supplier or partner “may conduct any commercial activity with Anthropic”, while allowing a six-month transition period for the Pentagon itself.

The row broke out days before the US military attack against Iran.

Claude is the Pentagon’s most widely deployed frontier AI model and the only such model currently operating on the Department of War’s classified systems.

In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon’s statutory authority, and deprive it of due process under the Fifth Amendment.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the complaint states.

The company is seeking a permanent injunction blocking enforcement of the designation.

OpenAI manager resigns

A robotics manager at OpenAI said on Saturday that she had resigned over the artificial intelligence giant’s deal with the US government to allow its technology’s deployment for war and domestic surveillance.

The company behind ChatGPT secured a defence contract with the Pentagon last month, hours after rival Anthropic refused to agree to unconditional military use of its technology.

OpenAI CEO Sam Altman later posted on X that the company would modify the contract so its models would not be used for “domestic surveillance of US persons and nationals”, after criticism that the deal gave military officials too much power without oversight.

Caitlin Kalinowski, a manager of the hardware team in the robotics division, posted on X that she cared deeply about “the robotics team and the work we built together”.

However, she added, “surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got”.

“This was about principle, not people,” she said.

Kalinowski wrote in a follow-up post that she took issue with the haste of OpenAI’s Pentagon deal.

“To be clear, my issue is that the announcement was rushed without the guardrails defined,” she wrote.

“It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”

Kalinowski previously worked at Meta, where she helped develop its augmented reality glasses.

Published in Dawn, March 10th, 2026
