Ptechhub
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
PtechHub
No Result
View All Result

New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands

The Hacker News by The Hacker News
October 27, 2025
Home Cybersecurity
Share on FacebookShare on Twitter


Oct 27, 2025Ravie LakshmananArtificial Intelligence / Vulnerability

Cybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code.

“This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News.

The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT’s persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user’s account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes.

Memory, first introduced by OpenAI in February 2024, is designed to allow the AI chatbot to remember useful details between chats, thereby allowing its responses to be more personalized and relevant. This could be anything ranging from a user’s name and favorite color to their interests and dietary preferences.

DFIR Retainer Services

The attack poses a significant security risk in that by tainting memories, it allows the malicious instructions to persist unless users explicitly navigate to the settings and delete them. In doing so, it turns a helpful feature into a potent weapon that can be used to run attacker-supplied code.

“What makes this exploit uniquely dangerous is that it targets the AI’s persistent memory, not just the browser session,” Michelle Levy, head of security research at LayerX Security, said. “By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers.”

“In our tests, once ChatGPT’s memory was tainted, subsequent ‘normal’ prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards.”

The attack plays out as follows –

  • User logs in to ChatGPT
  • The user is tricked into launching a malicious link by social engineering
  • The malicious web page triggers a CSRF request, leveraging the fact that the user is already authenticated, to inject hidden instructions into ChatGPT’s memory without their knowledge
  • When the user queries ChatGPT for a legitimate purpose, the tainted memories will be invoked, leading to code execution

Additional technical details to pull off the attack have been withheld. LayerX said the problem is exacerbated by ChatGPT Atlas’ lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge.

In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexit’s Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

This opens the door to a wide spectrum of attack scenarios, including one where a developer’s request to ChatGPT to write code can cause the AI agent to slip in hidden instructions as part of the vibe coding effort.

CIS Build Kits

The development comes as NeuralTrust demonstrated a prompt injection attack affecting ChatGPT Atlas, where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. It also follows a report that AI agents have become the most common data exfiltration vector in enterprise environments.

“AI browsers are integrating app, identity, and intelligence into a single AI threat surface,” Eshed said. “Vulnerabilities like ‘Tainted Memories’ are the new supply chain: they travel with the user, contaminate future work, and blur the line between helpful AI automation and covert control.”

“As the browser becomes the common interface for AI, and as new agentic browsers bring AI directly into the browsing experience, enterprises need to treat browsers as critical infrastructure, because that is the next frontier of AI productivity and work.”



Source link

Tags: computer securitycyber attackscyber newscyber security newscyber security news todaycyber security updatescyber updatesdata breachhacker newshacking newshow to hackinformation securitynetwork securityransomware malwaresoftware vulnerabilitythe hacker news
The Hacker News

The Hacker News

Next Post
D&H Distributing’s Advanced Solutions Plus Unit To Focus On The Enterprise, ‘A Pivotal Evolution’ For Its Business

D&H Distributing’s Advanced Solutions Plus Unit To Focus On The Enterprise, ‘A Pivotal Evolution’ For Its Business

Recommended.

Neoclouds capture growing AI workload traffic, Backblaze says

Neoclouds capture growing AI workload traffic, Backblaze says

February 9, 2026
CERT-UA Warns: Dark Crystal RAT Targets Ukrainian Defense via Malicious Signal Messages

CERT-UA Warns: Dark Crystal RAT Targets Ukrainian Defense via Malicious Signal Messages

March 20, 2025

Trending.

Chai AI Announces Upcoming Rollout of Apple and Google Age Verification APIs to Enhance Platform Safety

Chai AI Announces Upcoming Rollout of Apple and Google Age Verification APIs to Enhance Platform Safety

March 10, 2026
Huawei lanceert Next Generation FAN-oplossing

Huawei lanceert Next Generation FAN-oplossing

March 7, 2026
Baidu Announces Fourth Quarter and Fiscal Year 2025 Results

Baidu Announces Fourth Quarter and Fiscal Year 2025 Results

February 26, 2026
Half of Google’s software development now AI-generated | Computer Weekly

Half of Google’s software development now AI-generated | Computer Weekly

February 5, 2026
Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

March 24, 2026

PTechHub

A tech news platform delivering fresh perspectives, critical insights, and in-depth reporting — beyond the buzz. We cover innovation, policy, and digital culture with clarity, independence, and a sharp editorial edge.

Follow Us

Industries

  • AI & ML
  • Cybersecurity
  • Enterprise IT
  • Finance
  • Telco

Navigation

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Subscribe to Our Newsletter

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Copyright © 2025 | Powered By Porpholio

No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs

Copyright © 2025 | Powered By Porpholio