Anthropic Denies It Could Sabotage AI Tools During War

By Wired
March 21, 2026


Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration about the company potentially tampering with its AI tools during war.

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”

The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security—and what the limits on that usage should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that will prevent the Department of Defense from using the company’s software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude.

Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. However, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after.

In a filing earlier this week, government attorneys wrote that the Department of Defense “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government’s argument is that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates if the company disapproves of certain uses.

Ramasamy rejected that possibility. “Anthropic does not maintain any back door or remote ‘kill switch,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”

He went on to say that Anthropic would be able to provide updates only with the approval of the government and its cloud provider, in this case Amazon Web Services, though he didn’t specify it by name. Ramasamy added that Anthropic cannot access the prompts or other data military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision‑making,” the proposal stated, according to the filing, which referred to an alternative name for the Pentagon.

The company was also ready to accept language that would address its concerns about Claude being used to help carry out deadly strikes without human supervision, Heck claimed. But negotiations ultimately broke down.

For the time being, the Defense Department has said in court filings that it “is taking additional measures to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes” to the Claude systems currently in place.


Tags: Anthropic, Artificial Intelligence, Claude, Pentagon, War


Copyright © 2025 | Powered By Porpholio
