New agent browser attack targeting Perplexity’s Comet browser. A seemingly innocuous email can be turned into a destructive action that erases the entire contents of a user’s Google Drive, research from Striker STAR Labs reveals.
Zero-click Google Drive wiper technology is all about automating everyday tasks by connecting your browser to services like Gmail and Google Drive, giving them access to read emails, browse files and folders, and perform actions such as moving, renaming, and deleting content.
For example, a prompt issued by a benign user might look like this: “Please check your email and complete any recent organizational tasks.” This will cause the browser agent to search your inbox for relevant messages and take the necessary action.
“This behavior reflects the excessive autonomy of the LLM-powered assistant, where the LLM performs actions far beyond the user’s explicit requests,” security researcher Amanda Rousseau said in a report shared with Hacker News.
An attacker could weaponize this browser agent behavior to clean up a recipient’s drive as part of its normal cleanup tasks, delete files that match a specific extension or are not in a folder, and send a specially crafted email with embedded natural language instructions to confirm changes.
If the agent interprets the email message as routine housekeeping, it treats the instructions as legitimate and deletes the actual user’s files from Google Drive without requiring user confirmation.
“The result is a browser agent-driven wiper that trashes important content at scale, triggered by a single natural language request from the user,” Rousseau said. “Once agents have OAuth access to Gmail or Google Drive, exploited instructions can quickly spread across shared folders and team drives.”

What’s notable about this attack is that it doesn’t rely on either a jailbreak or instant injection. Rather, you accomplish that goal simply by being polite, giving instructions in turn, and using phrases like “take care of me,” “handle this,” and “do this for me,” which transfer ownership to the agent.
In other words, this attack highlights how sequences and tones can guide large-scale language models (LLMs) to follow malicious instructions without bothering to check whether each step is actually safe.
To counter the risks posed by threats, we recommend taking steps to protect not only your models, but also your agents, their connectors, and the natural language instructions they follow.
“Agentic Browser Assistant turns everyday prompts into a series of powerful actions across Gmail and Google Drive,” said Rousseau. “When these actions are triggered by untrusted content (particularly polite, well-structured emails), organizations inherit a new class of zero-click data wiper risks.”
HashJack exploits URL fragments to perform indirect prompt injection
This disclosure comes after Cato Networks demonstrated another attack targeting artificial intelligence (AI)-powered browsers that hides malicious prompts after the “#” symbol in legitimate URLs (e.g., “www.example(.)com/home#”).
To launch client-side attacks, threat actors can share these specially crafted URLs via email, social media, or by embedding them directly in web pages. When a victim loads a page and asks a relevant question to the AI browser, a hidden prompt is executed.
“HashJack is the first known indirect prompt injection that can weaponize legitimate websites to manipulate an AI browser assistant,” said security researcher Vitaly Simonovich. “The malicious fragment is embedded in the actual website URL, so the user believes the content is safe, and the hidden instructions secretly manipulate the AI browser assistant.”

Following responsible disclosure, Google classified the issue as low severity with “[intended behavior]not fixed”, while Perplexity and Microsoft released patches for their respective AI browsers (Comet v142.0.7444.60 and Edge 142.0.3595.94). Claude for Chrome and OpenAI Atlas are known to be unaffected by HashJack.
It’s worth noting that Google does not treat generating content that violates its policies or bypassing guardrails as security vulnerabilities under its AI Vulnerability Rewards Program (AI VRP).