Anthropic Sues Trump Administration Over Pentagon Blacklist

Anthropic says the government is retaliating against it for maintaining two safety guardrails for Claude, the company's suite of large language models (LLMs) and AI assistants. One prohibits fully autonomous lethal warfare; the other bars mass surveillance of Americans. Anthropic asserts that the government is punishing the company for engaging in protected speech. The Pentagon, however, insists that military use of the technology should not be constrained by a private company's policies. Trump has demanded that every federal agency in the U.S. government cease the use of all Anthropic technology.
Anthropic, a San Francisco-based artificial intelligence company, has taken its fight with the Trump administration to federal court. The company was recently designated a "Supply-Chain Risk to National Security" by the Department of Defense (DOD) after refusing to loosen restrictions on how its AI models can be used.
The lawsuit (pdf), filed Monday in the U.S. District Court for the Northern District of California, names the DOD and Defense Secretary Pete Hegseth along with numerous other federal agencies and officials. Anthropic is asking the court to block the blacklist designation and declare the government's actions unlawful.
The Clash Over AI Use
The complaint portrays the dispute as a breakdown in negotiations between the Pentagon and one of its most important AI partners.
Anthropic says it had worked closely with the military and that its flagship model, Claude, had already become deeply embedded in defense systems. According to the lawsuit, Claude is "reportedly the Department's most widely deployed and used frontier AI model, and the only frontier AI model on the Department's classified systems."
The relationship unraveled during negotiations over how the technology could be used.
The Pentagon demanded what it called "all lawful use" of Anthropic's AI models across defense operations. Anthropic says it agreed broadly but refused to remove two restrictions it considers essential:
Anthropic's Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse.
The company argues that those limits reflect the real capabilities and risks of current AI systems:
Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare.