PtechHub
Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

By Wired
March 18, 2026


The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company’s lawsuit against the government will fail.

“The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion,” US Department of Justice attorneys wrote.

The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to sanction the company with a label that can bar firms from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company’s technologies from being used inside the department. If the designation holds, Anthropic could lose billions of dollars in expected revenue this year.

Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic’s request.

Justice Department attorneys, writing for the Department of Defense and other agencies in the Tuesday filing, described Anthropic’s concerns about potentially losing business as “legally insufficient to constitute irreparable injury” and called on Lin to deny the company a reprieve.

The attorneys also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retained access” to government technology systems. “No one has purported to restrict Anthropic’s expressive activity,” they wrote.

The government argues that Anthropic’s push to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to “reasonably” determine that “Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.”

The Department of Defense and Anthropic have been fighting over potential restrictions on the company’s Claude AI models. Anthropic believes its models shouldn’t be used to facilitate broad surveillance of Americans and that they are not currently reliable enough to power fully autonomous weapons.

Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. But courts often favor national security arguments from the government, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted.

“In particular, DoW became concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” Tuesday’s filing states. “AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic—in its discretion—feels that its corporate ‘red lines’ are being crossed.”

The Defense Department and other federal agencies are working to replace Anthropic’s AI tools with products from competing tech companies in the next few months. One of the military’s top uses of Claude is through Palantir data analysis software, people familiar with the matter have told WIRED.

In Tuesday’s filing, the lawyers argued that the Pentagon “cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use” on the department’s “classified systems” and while “high-intensity combat operations are underway.” The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.

A number of companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders, have filed court briefs in support of Anthropic. None have been filed in support of the government.

Anthropic has until Friday to file a response to the government’s arguments.




Tags: Anthropic, Artificial Intelligence, Defense, Department of Defense, military, military tech

Copyright © 2025 | Powered By Porpholio
