Ptechhub
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
PtechHub
No Result
View All Result

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

The Hacker News by The Hacker News
February 3, 2026
Home Cybersecurity
Share on FacebookShare on Twitter


Ravie LakshmananFeb 03, 2026Artificial Intelligence / Vulnerability

Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data.

The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025.

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools,” Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.

“Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.”

Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications.

The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution.

With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been characterized as a case of Meta-Context Injection.

“MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” Levi said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.”

In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields. 

While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows –

  • The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile
  • When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to differentiate between legitimate metadata descriptions and embedded malicious instructions
  • Ask Gordon to forward the parsed instructions to the MCP gateway, a middleware layer that sits between AI agents and MCP servers.
  • MCP Gateway interprets it as a standard request from a trusted source and invokes the specified MCP tools without any additional validation
  • MCP tool executes the command with the victim’s Docker privileges, achieving code execution

The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon’s Docker Desktop implementation to capture sensitive internal data about the victim’s environment using MCP tools by taking advantage of the assistant’s read-only permissions.

The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology.

It’s worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions.

“The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi said. “It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.”



Source link

The Hacker News

The Hacker News

Next Post
With Mann Departure, Scale Computing Loses Strong Channel Champion

With Mann Departure, Scale Computing Loses Strong Channel Champion

Recommended.

Non-Human Identities: How to Address the Expanding Security Risk

Non-Human Identities: How to Address the Expanding Security Risk

June 12, 2025
Here’s What 20 Channel Chiefs See As The Biggest Challenges Facing Partners In 2026

Here’s What 20 Channel Chiefs See As The Biggest Challenges Facing Partners In 2026

February 6, 2026

Trending.

Weibo Publishes 2025 Environmental, Social and Governance Report

Weibo Publishes 2025 Environmental, Social and Governance Report

April 28, 2026
It Takes 2 Minutes to Hack the EU’s New Age-Verification App

It Takes 2 Minutes to Hack the EU’s New Age-Verification App

April 18, 2026
CTIA Names Preston Wise Senior Vice President of External and State Affairs

CTIA Names Preston Wise Senior Vice President of External and State Affairs

May 6, 2026
The AI Correction Will Not Be Evenly Distributed | Computer Weekly

The AI Correction Will Not Be Evenly Distributed | Computer Weekly

May 5, 2026
Match Group Announces First Quarter Results

Match Group Announces First Quarter Results

May 5, 2026

PTechHub

A tech news platform delivering fresh perspectives, critical insights, and in-depth reporting — beyond the buzz. We cover innovation, policy, and digital culture with clarity, independence, and a sharp editorial edge.

Follow Us

Industries

  • AI & ML
  • Cybersecurity
  • Enterprise IT
  • Finance
  • Telco

Navigation

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Subscribe to Our Newsletter

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Copyright © 2025 | Powered By Porpholio

No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs

Copyright © 2025 | Powered By Porpholio