AI tech firm Anthropic sues over blacklisting by Pentagon
Anthropic, the developer of AI chatbot Claude, has launched legal action against the U.S. Department of Defense after being designated a “supply chain risk” on national security grounds. The firm alleges the decision was “unprecedented and unlawful,” marking a significant confrontation with the Trump administration.
The Pentagon classified Anthropic as a “supply chain risk” on Thursday, citing its refusal to permit unrestricted military applications of its technology. This move has sparked a public debate about the potential use of Claude in warfare, with Anthropic emphasizing its stance against mass surveillance and autonomous weapons systems.
Legal Challenges Target Pentagon’s Actions
In response, Anthropic filed two separate lawsuits on Monday. One was submitted in California federal court, while the other was lodged with the federal appeals court in Washington D.C., each addressing distinct facets of the Pentagon’s measures. “These actions are unprecedented and unlawful,” the lawsuit states, arguing that the government has overstepped its constitutional authority.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorises the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
The Defense Department declined to comment, adhering to its policy of not responding to ongoing litigation. Anthropic’s financial backers include Alphabet’s Google and Amazon, which have supported the company’s efforts to limit its technology’s role in surveillance and weapons systems.
Trump’s Threats and Pentagon’s Response
Defense Secretary Pete Hegseth had previously threatened to penalize Anthropic for not accepting “all lawful uses” of Claude. Meanwhile, Donald Trump pledged to direct federal agencies to cease using the AI assistant, granting the Pentagon six months to implement the ban. Despite this timeline, Claude is deeply integrated into classified systems, including those deployed in the Iran conflict.
By labeling Anthropic a supply chain risk, the Pentagon aims to restrict its defense work through mechanisms intended to shield national security systems from foreign threats. This is the first known case where the federal government has applied such a designation to a U.S.-based company.
Financial Stakes and Industry Context
Anthropic, valued at $380 billion, has argued that the Trump administration’s penalty is narrowly targeted, affecting only military contractors using Claude for defense purposes. A significant portion of its projected $14 billion in annual revenue stems from businesses and government agencies that rely on the AI for coding tasks and other functions.
Earlier this year, the Defense Department inked agreements worth up to $200 million with major AI labs, including Anthropic, OpenAI, and Google. Microsoft-backed OpenAI reached a deal with the U.S. military shortly after Hegseth initiated the blacklisting of Anthropic.
