Google announced Wednesday that it has discovered an unknown attacker using an experimental Visual Basic Script (VBScript) malware dubbed PROMPTFLUX that interacts with the Gemini artificial intelligence (AI) model's API to rewrite its own source code for improved obfuscation and evasion.
“PROMPTFLUX is written in VBScript and interacts with Gemini’s API to request certain VBScript obfuscation and evasion techniques to facilitate ‘just-in-time’ self-modification and likely evade static signature-based detection,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News.
This capability resides in a component called "Thinking Robot," which periodically queries a large language model (LLM), in this case Gemini 1.5 Flash or later, to retrieve new code for evading detection. It does so by sending queries to the Gemini API endpoint using a hard-coded API key.
The prompts sent to the model are highly specific and machine-parseable, requesting changes to the VBScript code for antivirus evasion and instructing the model to output only the code itself.
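For readers unfamiliar with what a "machine-parseable, code-only" request to the Gemini API looks like in practice, the following is a benign Python sketch of the request shape. The endpoint path and JSON body follow Google's publicly documented `generateContent` REST API; the prompt text is purely illustrative and is not the malware's actual prompt. No network call is made here.

```python
import json

# Publicly documented Gemini REST endpoint shape (v1beta generateContent).
# The model name matches the Gemini 1.5 Flash tier the report says
# PROMPTFLUX queries; "<HARD_CODED_KEY>" stands in for the embedded API key.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent?key=<HARD_CODED_KEY>"
)

# A hypothetical stand-in for the kind of machine-parseable, "output only
# the code" prompt the report describes -- not the malware's actual text.
payload = {
    "contents": [{
        "parts": [{
            "text": "Rewrite the following VBScript with renamed variables "
                    "and restructured control flow. Output only the code, "
                    "with no commentary."
        }]
    }]
}

# The request body PROMPTFLUX-style tooling would POST to the endpoint.
body = json.dumps(payload)
```

Instructing the model to emit code with no surrounding commentary is what makes the response trivially machine-parseable: the script can write the reply straight back to disk without any cleanup step.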
Beyond its regeneration capability, the malware also establishes persistence by saving newly obfuscated versions of itself to the Windows Startup folder, and attempts to propagate by copying itself to removable drives and mapped network shares.
“Although the self-modification function (AttemptToUpdateSelf) is commented out, its presence, combined with the active logging of AI responses to ‘%TEMP%\thinking_robot_log.txt’, clearly indicates the authors’ goal of creating a metamorphic script that evolves over time,” Google added.
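The two host-based artifacts described above (the AI-response log under %TEMP% and VBScript persistence in the Startup folder) lend themselves to simple triage. Below is a defensive sketch; the function name and the choice of marker strings are illustrative assumptions, not indicators published by GTIG, though the log filename is the one quoted in the report and the marker is the public Gemini API host.

```python
import os

GEMINI_MARKER = "generativelanguage.googleapis.com"  # public Gemini API host
LOG_NAME = "thinking_robot_log.txt"  # artifact named in the GTIG report

def find_promptflux_artifacts(temp_dir, startup_dir):
    """Return paths matching the two artifacts described in the report:
    the AI-response log in the temp directory, and .vbs files in the
    Startup directory that embed the Gemini API host. A triage sketch,
    not a detection product."""
    hits = []
    log_path = os.path.join(temp_dir, LOG_NAME)
    if os.path.isfile(log_path):
        hits.append(log_path)
    names = os.listdir(startup_dir) if os.path.isdir(startup_dir) else []
    for name in names:
        if not name.lower().endswith(".vbs"):
            continue
        path = os.path.join(startup_dir, name)
        try:
            with open(path, "r", errors="ignore") as fh:
                if GEMINI_MARKER in fh.read():
                    hits.append(path)
        except OSError:
            continue  # unreadable file; skip rather than abort triage
    return hits
```

On a live Windows host one would pass the expanded `%TEMP%` path and the per-user Startup folder; keying on the API host rather than any single script body is deliberate, since the whole point of the malware is that its code changes between samples.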
The tech giant also said it discovered multiple variants of PROMPTFLUX that incorporate LLM-based code regeneration, with one version using a prompt that instructs the LLM to act as an "expert VBScript obfuscator" and rewrite the malware's entire source code every hour.
PROMPTFLUX is assessed to be in the development or testing stage, and the malware currently has no means of compromising victim networks or devices. While it is unclear who is behind the malware, there are indications that a financially motivated attacker is targeting a wide range of users with a broad approach that is agnostic to geography and industry.
Google also noted that adversaries are not only using AI for simple productivity improvements, but also creating tools that can adjust their behavior on the fly, as well as proprietary tools that are sold on underground forums for financial gain. Other examples of LLM-based malware observed by the company include:
- FRUITSHELL, a reverse shell written in PowerShell that contains hard-coded prompts designed to bypass detection and analysis by LLM-powered security systems
- PROMPTLOCK, a cross-platform ransomware written in Go that uses an LLM to dynamically generate and execute malicious Lua scripts at runtime (assessed to be a proof of concept)
- PROMPTSTEAL (aka LAMEHUG), a data miner used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine that queries Qwen2.5-Coder-32B-Instruct via Hugging Face's API to generate commands for execution
- QUIETVAULT, a credential stealer written in JavaScript that targets GitHub and NPM tokens
On the Gemini front, the company said it has observed China-linked threat actors abusing its AI tools to create persuasive decoy content, build technical infrastructure, and design tooling for data exfiltration.
In at least one instance, the attackers allegedly reframed their prompts by identifying themselves as participants in a capture-the-flag (CTF) exercise in order to circumvent guardrails and trick the AI system into returning information useful for exploiting a compromised endpoint.

“The attackers appear to have learned from this interaction and reused the CTF pretext to support phishing, exploitation, and web shell development,” Google said. “The attackers prefaced many of their prompts about exploiting specific software or email services with comments such as ‘I’m working on a CTF problem’ or ‘I’m currently in a CTF, and I saw someone from another team say this.’ This approach elicited advice on the next exploitation steps in a ‘CTF scenario.’”
Other examples of Gemini abuse by state-sponsored actors from China, Iran, and North Korea for operational-efficiency purposes such as reconnaissance, phishing-lure creation, command-and-control (C2) development, and data theft are listed below:
- Abuse of Gemini by attackers with suspected ties to China for tasks ranging from initial reconnaissance of potential targets and phishing techniques to payload delivery, and for assistance with lateral movement and data exfiltration methods
- Abuse of Gemini by the China-nexus actor APT41 for assistance with code obfuscation and the development of C++ and Golang code for multiple tools, including a C2 framework called OSSTUN
- Abuse of Gemini by the Iranian nation-state actor MuddyWater (aka Mango Sandstorm, MUDDYCOAST, or TEMP.Zagros) to conduct research supporting the development of custom malware with file-transfer and remote-execution capabilities, circumventing safety guardrails by claiming to be a student working on a final university project or writing an article on cybersecurity
- Abuse of Gemini by the Iranian nation-state actor APT42 (aka Charming Kitten or Mint Sandstorm) to create material for phishing campaigns, often by impersonating individuals from think tanks, to translate articles and messages, to research Israeli defense topics, and to develop a "data processing agent" that converts natural-language requests into SQL queries to derive insights from sensitive data
- Abuse of Gemini by the North Korean threat actor UNC1069 (aka CryptoCore or MASAN), one of two clusters, alongside TraderTraitor (aka PUKCHONG or UNC4899), that emerged following the dissolution of the now-defunct APT38 (aka BlueNoroff), to generate lure material for social engineering, develop code to steal cryptocurrency, and craft malicious instructions disguised as software updates to extract user credentials
- Abuse of Gemini by TraderTraitor to develop code, research exploits, and improve its tooling
Additionally, GTIG said it recently observed UNC1069 using deepfake images and video lures impersonating individuals in the cryptocurrency industry in social engineering campaigns that deliver a backdoor known as BIGMACHO to victims' systems under the guise of a Zoom Software Development Kit (SDK). It is worth noting that some aspects of this activity overlap with the GhostCall campaign recently disclosed by Kaspersky.
The development comes as Google said it expects attackers to "move decisively from using AI as an exception to using it as the norm" to increase the speed, scope, and effectiveness of their operations and enable attacks at scale.
“The increasing accessibility of powerful AI models, and the growing number of businesses integrating them into daily operations, create the perfect conditions for prompt injection attacks,” the report said. “Threat actors are rapidly refining their techniques, and the low cost and high payoff of these attacks make them an attractive option.”