Anthropic takes US govt to court over ‘security risk’ designation

Published March 10, 2026
Anthropic logo is seen in this illustration taken on May 20, 2024. — Reuters/File Photo

WASHINGTON: Anthropic filed suit on Monday against the Trump administration, alleging it retaliated against the AI company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.

In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.

In its lawsuit, Anthropic said it was founded on the belief that AI should be “used in a way that maximises positive outcomes for humanity” and should “be the safest and the most responsible”.

“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle,” the lawsuit says.


Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organisations from foreign adversary countries, such as Chinese tech giant Huawei.

The designation requires defence vendors and contractors to certify that they do not use Anthropic’s models in their work with the Pentagon.

“The consequences of this case are enormous,” the lawsuit states, with the government “seeking to destroy the economic value created by one of the world’s fastest-growing private companies”.

The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.

President Donald Trump ordered every federal agency to cease all use of Anthropic’s technology. Hours later, Hegseth designated Anthropic a “supply-chain risk to national security” and ordered that no military contractor, supplier or partner “may conduct any commercial activity with Anthropic”, while allowing a six-month transition period for the Pentagon itself.

The row came days before the US military attack against Iran.

Claude is the Pentagon’s most widely deployed frontier AI model and the only such model currently operating on the Department of War’s classified systems.

In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon’s statutory authority, and deprive it of due process under the Fifth Amendment.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the complaint states.

The company is seeking a permanent injunction blocking enforcement of the designation.

OpenAI manager resigns

A robotics manager at OpenAI said on Saturday that she had resigned over the artificial intelligence giant’s deal with the US government to allow its technology’s deployment for war and domestic surveillance.

The company behind ChatGPT secured a defence contract with the Pentagon last month, hours after rival Anthropic refused to agree to unconditional military use of its technology.

OpenAI’s CEO Sam Altman later posted on X that the company would modify the contract so that its models could not be used for “domestic surveillance of US persons and nationals”, following criticism that it was giving too much power to military officials without oversight.

Caitlin Kalinowski, a manager of the hardware team in the robotics division, posted on X that she cared deeply about “the robotics team and the work we built together”.

However, she added, “surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got”.

“This was about principle, not people,” she said.

Kalinowski wrote in a follow-up post that she took issue with the haste of OpenAI’s Pentagon deal.

“To be clear, my issue is that the announcement was rushed without the guardrails defined,” she wrote.

“It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.” Kalinowski previously worked at Meta, developing their augmented reality glasses.

Published in Dawn, March 10th, 2026
