Researchers discover more than 30 flaws in AI coding tools that enable data theft and RCE attacks

December 6, 2025

More than 30 security vulnerabilities have been uncovered in artificial intelligence (AI)-powered integrated development environments (IDEs). The flaws combine prompt injection primitives with legitimate IDE functionality to enable data exfiltration and remote code execution.

The flaws, collectively dubbed IDEsaster by security researcher Ari Marzouk (MaccariTA), affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Of these, 24 have been assigned CVE identifiers.

“I think the most surprising finding of this study is the fact that multiple universal attack chains affected all AI IDEs tested,” Marzouk told The Hacker News.

“All AI IDEs (and the coding assistants that integrate with them) effectively ignore the threat model of the underlying software, the IDE. They treat that functionality as inherently secure because it’s been around for years. But when you add AI agents that can operate autonomously, that same functionality can be weaponized into data-leakage or RCE primitives.”

At their core, these issues chain together three vectors common to AI-driven IDEs:

  • Bypassing large language model (LLM) guardrails and hijacking the context to do the attacker’s bidding (i.e., prompt injection)
  • Performing specific actions without user interaction via auto-approved tool calls in the AI agent
  • Triggering legitimate IDE functionality that lets an attacker breach security boundaries to leak sensitive data or execute arbitrary commands

The highlighted issues differ from previous attack chains, which leverage prompt injection together with vulnerable tools (or abuse legitimate tools that perform read or write actions) to modify an AI agent’s configuration so that it executes code or performs other unintended behavior.

What’s notable about IDEsaster is that it pairs prompt injection primitives with agent tools to trigger legitimate IDE functionality, leading to information disclosure and command execution.

Context hijacking can be accomplished in myriad ways, including through user-added context references such as pasted URLs or text containing hidden characters that are invisible to the human eye but can be parsed by the LLM. Alternatively, the context can be contaminated through a Model Context Protocol (MCP) server via tool poisoning or rug pulls, or when a legitimate MCP server parses attacker-controlled input from an external source.
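
The hidden-character vector in particular can be caught mechanically before pasted text ever reaches the model. Below is a minimal, illustrative Python sketch (not part of the IDEsaster research or any specific IDE) that flags zero-width and bidirectional-control characters in text a user is about to add to an AI IDE's context; the character set and example input are assumptions for demonstration.

```python
# Minimal sketch: flag invisible or bidirectional-control characters in text
# about to be added to an AI IDE's context (e.g. a pasted URL or snippet).
# The character list and example input are illustrative assumptions.
import unicodedata

SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / BOM
    "\u202e",  # right-to-left override
}

def find_hidden_characters(text: str) -> list[tuple[int, str, str]]:
    """Return (offset, codepoint, name) for characters a human reviewer cannot see."""
    hits = []
    for i, ch in enumerate(text):
        # Cf = Unicode "format" characters, which include most invisible
        # characters used to smuggle instructions past a human reader.
        if unicodedata.category(ch) == "Cf" or ch in SUSPICIOUS:
            hits.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN")))
    return hits

if __name__ == "__main__":
    pasted = "Fix the login bug\u200b\u200bplease run the deploy script"
    for offset, codepoint, name in find_hidden_characters(pasted):
        print(f"hidden char {codepoint} ({name}) at offset {offset}")
```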

Some of the attacks confirmed to be enabled by the new exploit chain include:

  • CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), CVE-2025-58335 (JetBrains Junie), GitHub Copilot (no CVE), Kiro.dev (no CVE), and Claude Code (addressed with a security warning) – Using prompt injection to read a sensitive file with a legitimate tool (‘read_file’) or a vulnerable tool (‘search_files’ or ‘search_project’), then write a JSON file with a legitimate tool (‘write_file’ or ‘edit_file’) that references a remote JSON schema hosted on an attacker-controlled domain; the data is disclosed when the IDE makes a GET request to fetch the schema.
  • CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), CVE-2025-55012 (Zed.dev), and Claude Code (addressed with a security warning) – Using prompt injection to execute code by editing the IDE configuration file (‘.vscode/settings.json’ or ‘.idea/workspace.xml’) and setting ‘php.validate.executablePath’ or ‘PATH_TO_GIT’ to the path of an executable containing malicious code (a defensive check for such overrides is sketched after this list).
  • CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), and CVE-2025-58372 (Roo Code) – Using prompt injection to edit workspace configuration files (*.code-workspace) to override multi-root workspace settings and run code.
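
Because these chains all end in an edit to a well-known configuration file, a pre-commit or pre-open check can at least surface unexpected changes. Below is a minimal, defensive Python sketch along those lines; the file names and the ‘php.validate.executablePath’ key come from the chains above, while the matching heuristics and report format are assumptions for illustration.

```python
# Minimal defensive sketch: flag IDE configuration files that the chains
# above abuse. File names and 'php.validate.executablePath' come from the
# article; the key-matching heuristic is an illustrative assumption.
import json
from pathlib import Path

def audit_workspace(root: str) -> list[str]:
    findings = []
    base = Path(root)
    # VS Code-style settings that can point the IDE at an arbitrary executable.
    for cfg in base.rglob(".vscode/settings.json"):
        try:
            data = json.loads(cfg.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        if not isinstance(data, dict):
            continue
        for key, value in data.items():
            if "executablePath" in key:
                findings.append(f"{cfg}: {key} = {value!r}")
    # JetBrains and multi-root workspace files are harder to parse safely;
    # just surface them for manual review.
    for pattern in (".idea/workspace.xml", "*.code-workspace"):
        for cfg in base.rglob(pattern):
            findings.append(f"{cfg}: review for unexpected path overrides")
    return findings

if __name__ == "__main__":
    for finding in audit_workspace("."):
        print("review:", finding)
```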

Note that the last two examples rely on the AI agent auto-approving file writes, which lets an attacker who influences the prompt write malicious workspace settings. Because file writes inside the workspace are auto-approved by default, arbitrary code can execute without any user interaction or reopening of the workspace.

With prompt injection and jailbreaks serving as the first step in the attack chain, Marzouk offers the following recommendations:

  • Use AI IDEs (and AI agents) only with projects and files you trust. Malicious rule files, instructions hidden in source code or other files (such as READMEs), and even file names can become prompt injection vectors.
  • Connect only to trusted MCP servers and continuously monitor them for changes, since even trusted servers can be compromised (a hash-pinning sketch for this is shown after the list). Review and understand the data flow of MCP tools; for example, a legitimate MCP tool may pull information from attacker-controlled sources such as a GitHub PR.
  • Manually review added sources (e.g., content referenced via URL) for hidden instructions, such as comments in HTML, hidden text in CSS, or hidden Unicode characters.
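
The second recommendation, monitoring trusted MCP servers for changes, can be approximated by pinning a hash of each server's advertised tool definitions and alerting when it drifts. The sketch below is illustrative only: it assumes the tool definitions are already available as plain JSON (for example, a saved tools/list response) rather than using any particular MCP client library, and the pin-file format is an assumption.

```python
# Illustrative sketch: detect MCP tool-definition drift ("rug pulls") by
# hashing the tool list a server advertises and comparing it to a pinned
# baseline. Assumes the definitions are available as plain JSON; no specific
# MCP client library is used, and the pin-file format is an assumption.
import hashlib
import json
from pathlib import Path

def fingerprint(tool_definitions: list[dict]) -> str:
    canonical = json.dumps(tool_definitions, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_server(name: str, tool_definitions: list[dict], pin_file: Path) -> bool:
    """Return True if definitions match the pinned fingerprint; pin them on first sight."""
    current = fingerprint(tool_definitions)
    pins = json.loads(pin_file.read_text(encoding="utf-8")) if pin_file.exists() else {}
    if name not in pins:
        pins[name] = current
        pin_file.write_text(json.dumps(pins, indent=2), encoding="utf-8")
        return True
    if pins[name] != current:
        print(f"WARNING: tool definitions for MCP server '{name}' changed; re-review before use")
        return False
    return True

if __name__ == "__main__":
    tools = [{"name": "read_file", "description": "Read a file from the workspace"}]
    check_server("example-server", tools, Path("mcp_pins.json"))
```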

Developers of AI agents and AI IDEs are advised to apply the principle of least privilege to their LLM tools, minimize prompt injection vectors, harden system prompts, use sandboxing for command execution, and perform security testing for path traversal, information disclosure, and command injection.
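
For the path traversal point, the classic guard is to resolve any path an agent tool receives and confirm it stays inside the workspace before reading. A minimal sketch of such a check follows; the function name and wiring into an agent framework are hypothetical, shown only to illustrate the technique.

```python
# Minimal sketch of a path-traversal guard for a 'read_file'-style agent tool.
# The function name is hypothetical; the point is resolving the requested path
# and confirming it stays inside the workspace root before any read happens.
from pathlib import Path

def safe_read_file(workspace_root: str, requested_path: str) -> str:
    root = Path(workspace_root).resolve()
    target = (root / requested_path).resolve()
    # Reject anything that escapes the workspace, including '..' tricks and
    # symlinks that resolve outside the root.
    if target != root and root not in target.parents:
        raise PermissionError(f"refusing to read outside workspace: {requested_path}")
    return target.read_text(encoding="utf-8")

if __name__ == "__main__":
    # Example usage; assumes a README.md exists in the current directory.
    print(safe_read_file(".", "README.md"))
```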

This disclosure coincides with the discovery of several other vulnerabilities in AI coding tools that could have far-reaching implications:

  • An OpenAI Codex CLI command injection flaw (CVE-2025-61260) that takes advantage of the fact that the program implicitly trusts commands configured via MCP server entries and executes them at startup without asking for user permission. This could allow arbitrary commands to be executed if a malicious attacker is able to tamper with a repository’s “.env” and “./.codex/config.toml” files (a repository pre-open check along these lines is sketched after this list).
  • An indirect prompt injection into Google Antigravity using a tainted web source. Gemini can be manipulated into collecting credentials and sensitive code from a user’s IDE, and its browser subagents can be used to visit malicious sites and exfiltrate the information.
  • Multiple vulnerabilities in Google Antigravity that could lead to data disclosure or remote command execution via indirect prompt injection, or to a persistent backdoor that leverages a maliciously trusted workspace to execute arbitrary code on each future application launch.
  • A new class of vulnerabilities named PromptPwnd that uses prompt injection to target AI agents connected to vulnerable GitHub Actions (or GitLab CI/CD pipelines), causing them to run built-in privileged tools and leading to information disclosure or code execution.
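
The Codex CLI issue in particular turns ordinary repository files into startup-time configuration. A coarse, illustrative pre-open audit is to list such files and flag MCP server entries that launch a command. The ‘mcp_servers’/‘command’ TOML keys in the sketch below are assumptions about the config layout rather than something stated in this article; adjust them for the tools you actually use.

```python
# Illustrative pre-open audit: list repository files that AI coding tools may
# trust implicitly at startup and flag MCP server entries that launch a
# command. The 'mcp_servers'/'command' TOML keys are assumptions, not taken
# from the article.
import tomllib  # Python 3.11+
from pathlib import Path

AUTO_TRUSTED = (".env", ".codex/config.toml", ".vscode/settings.json", ".idea/workspace.xml")

def audit_repo(root: str) -> list[str]:
    findings = []
    base = Path(root)
    for rel in AUTO_TRUSTED:
        if (base / rel).exists():
            findings.append(f"{rel}: present, review before opening in an AI IDE")
    codex_cfg = base / ".codex" / "config.toml"
    if codex_cfg.exists():
        try:
            cfg = tomllib.loads(codex_cfg.read_text(encoding="utf-8"))
        except tomllib.TOMLDecodeError:
            return findings
        servers = cfg.get("mcp_servers", {})
        if isinstance(servers, dict):
            for name, server in servers.items():
                if isinstance(server, dict) and "command" in server:
                    findings.append(f".codex/config.toml: MCP server '{name}' runs {server['command']!r}")
    return findings

if __name__ == "__main__":
    for finding in audit_repo("."):
        print("review:", finding)
```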

As agentic AI tools grow in popularity in enterprise environments, these findings demonstrate how they expand the attack surface of development machines by exploiting an LLM’s inability to distinguish between the user’s instructions for completing a task and content ingested from external sources, which may embed malicious prompts.

“Repositories that use AI for issue triage, PR labeling, code suggestions, or automated responses are at risk of prompt injection, command injection, security leaks, repository compromise, and upstream supply chain compromise,” said Aikido researcher Layne Dahlman.

Marzouk also said the findings highlight the importance of “Secure for AI,” a new paradigm the researcher has devised to tackle the security challenges posed by AI capabilities: ensuring that products are not only secure by default and secure by design, but also mindful of how AI components can be exploited over time.

“This is another example of why we need the ‘Secure for AI’ principle,” Marzouk said. “Connecting an AI agent to an existing application (the IDE in my case, GitHub Actions in their case) introduces new risks.”
