Ptechhub
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
PtechHub
No Result
View All Result

New ‘Rules File Backdoor’ Attack Lets Hackers Inject Malicious Code via AI Code Editors

The Hacker News by The Hacker News
March 18, 2025
Home Cybersecurity
Share on FacebookShare on Twitter


Mar 18, 2025Ravie LakshmananAI Security / Software Security

Cybersecurity researchers have disclosed details of a new supply chain attack vector dubbed Rules File Backdoor that affects artificial intelligence (AI)-powered code editors like GitHub Copilot and Cursor, causing them to inject malicious code.

“This technique enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions into seemingly innocent configuration files used by Cursor and GitHub Copilot,” Pillar security’s Co-Founder and CTO Ziv Karliner said in a technical report shared with The Hacker News.

Cybersecurity

“By exploiting hidden unicode characters and sophisticated evasion techniques in the model facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews.”

The attack vector is notable for the fact that it allows malicious code to silently propagate across projects, posing a supply chain risk.

Malicious Code via AI Code Editors

The crux of the attack hinges on the rules files that are used by AI agents to guide their behavior, helping users to define best coding practices and project architecture.

Specifically, it involves embedding carefully crafted prompts within seemingly benign rule files, causing the AI tool to generate code containing security vulnerabilities or backdoors. In other words, the poisoned rules nudge the AI into producing nefarious code.

This can be accomplished by using zero-width joiners, bidirectional text markers, and other invisible characters to conceal malicious instructions and exploiting the AI’s ability to interpret natural language to generate vulnerable code via semantic patterns that trick the model into overriding ethical and safety constraints.

Cybersecurity

Following responsible disclosure in late February and March 2024, both Cursor and GiHub have stated that users are responsible for reviewing and accepting suggestions generated by the tools.

“‘Rules File Backdoor’ represents a significant risk by weaponizing the AI itself as an attack vector, effectively turning the developer’s most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software,” Karliner said.

“Once a poisoned rule file is incorporated into a project repository, it affects all future code-generation sessions by team members. Furthermore, the malicious instructions often survive project forking, creating a vector for supply chain attacks that can affect downstream dependencies and end users.”

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.





Source link

Tags: computer securitycyber attackscyber newscyber security newscyber security news todaycyber security updatescyber updatesdata breachhacker newshacking newshow to hackinformation securitynetwork securityransomware malwaresoftware vulnerabilitythe hacker news
The Hacker News

The Hacker News

Next Post
The most widely followed investor survey showed a huge ‘crash’ in bullish sentiment for stocks

The most widely followed investor survey showed a huge 'crash' in bullish sentiment for stocks

Recommended.

Dutch voters grasp digital urgency better than their politicians | Computer Weekly

Dutch voters grasp digital urgency better than their politicians | Computer Weekly

November 21, 2025
Interview: Vivek Bharadwaj, CIO, Happy Socks | Computer Weekly

Interview: Vivek Bharadwaj, CIO, Happy Socks | Computer Weekly

July 8, 2025

Trending.

Chai AI Announces Upcoming Rollout of Apple and Google Age Verification APIs to Enhance Platform Safety

Chai AI Announces Upcoming Rollout of Apple and Google Age Verification APIs to Enhance Platform Safety

March 10, 2026
Huawei lanceert Next Generation FAN-oplossing

Huawei lanceert Next Generation FAN-oplossing

March 7, 2026
Baidu Announces Fourth Quarter and Fiscal Year 2025 Results

Baidu Announces Fourth Quarter and Fiscal Year 2025 Results

February 26, 2026
Half of Google’s software development now AI-generated | Computer Weekly

Half of Google’s software development now AI-generated | Computer Weekly

February 5, 2026
Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

March 24, 2026

PTechHub

A tech news platform delivering fresh perspectives, critical insights, and in-depth reporting — beyond the buzz. We cover innovation, policy, and digital culture with clarity, independence, and a sharp editorial edge.

Follow Us

Industries

  • AI & ML
  • Cybersecurity
  • Enterprise IT
  • Finance
  • Telco

Navigation

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Subscribe to Our Newsletter

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Copyright © 2025 | Powered By Porpholio

No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs

Copyright © 2025 | Powered By Porpholio