Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

By Wired
September 30, 2025


Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out.

Previously, the company did not train its generative AI models on user chats. When Anthropic’s privacy policy updates on October 8 to start allowing for this, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models.

Why the switch-up? “All large language models, like Claude, are trained using large amounts of data,” reads part of Anthropic’s blog explaining why the company made this policy change. “Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users.” With more user data thrown into the LLM blender, Anthropic’s developers hope to make a better version of their chatbot over time.

The change was originally scheduled to take place on September 28 before being bumped back. “We wanted to give users more time to review this choice and ensure we have a smooth technical transition,” Gabby Curtis, a spokesperson for Anthropic, wrote in an email to WIRED.

How to Opt Out

New users are asked to make a decision about their chat data during their sign-up process. Existing Claude users may have already encountered a pop-up laying out the changes to Anthropic’s terms.

“Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” it reads. The toggle that provides your data to Anthropic to train Claude is on by default, so users who accepted the updates without flipping that toggle were opted into the new training policy.

All users can toggle conversation training on or off under the Privacy Settings. Under the setting that’s labeled Help improve Claude, make sure the switch is turned off and to the left if you’d rather not have your Claude chats train Anthropic’s new models.

If a user doesn’t opt out of model training, the new policy covers all new and revisited chats. That means Anthropic is not automatically training its next model on your entire chat history, but if you go back into the archives and reignite an old thread, that conversation is reopened and fair game for future training.

The new privacy policy also arrives with an expansion of Anthropic’s data retention practices: the company increased the amount of time it holds onto user data from 30 days in most situations to five years, whether or not users allow model training on their conversations.

Anthropic’s change in terms applies to consumer-tier users, free as well as paid. Commercial users, like those licensed through government or educational plans, are not impacted by the change, and conversations from those users will not be used as part of the company’s model training.

Claude is a favorite AI tool for some software developers who’ve latched onto its abilities as a coding assistant. Since the privacy policy update includes coding projects as well as chat logs, Anthropic could gather a sizable amount of coding information for training purposes with this switch.

Prior to this update, Claude was one of the only major chatbots that did not use conversations for LLM training by default. By comparison, the default settings for both OpenAI’s ChatGPT and Google’s Gemini on personal accounts include the possibility of model training unless the user chooses to opt out.

Check out WIRED’s full guide to AI training opt-outs for more services where you can request generative AI not be trained on user data. While choosing to opt out of data training is a boon for personal privacy, especially when dealing with chatbot conversations or other one-on-one interactions, it’s worth keeping in mind that anything you post publicly online, from social media posts to restaurant reviews, will likely be scraped by some startup as training material for its next giant AI model.


Tags: algorithms, Anthropic, artificial intelligence, chatbots, machine learning, privacy