InsighthubNews
Microsoft discovers ‘whisper leak’ attack that identifies AI chat topics in encrypted traffic

November 8, 2025
Microsoft has revealed details of a new side-channel attack targeting remote language models that, under certain circumstances, could allow a passive attacker capable of observing network traffic to infer a model's conversation topics despite cryptographic protection.

The company noted that this leakage of data exchanged between humans and language models in streaming mode could pose a significant risk to the privacy of user and corporate communications. The attack has been codenamed Whisper Leak.

“A cyber attacker in a position to observe encrypted traffic (for example, a nation-state attacker at the Internet service provider layer, someone on a local network, or someone connected to the same Wi-Fi router) could use this cyber attack to infer whether a user’s prompts are about a particular topic,” said security researchers Jonathan Bar Or and Geoff McDonald, and the Microsoft Defender security research team.

In other words, the attack lets an adversary observe the encrypted TLS traffic between a user and an LLM service, extract packet sizes and timing sequences, and feed them to a trained classifier to infer whether the topic of the conversation matches a category of sensitive interest.
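The pipeline above starts from nothing more than packet metadata. As a rough illustration (this is a toy sketch, not Microsoft's code, and the feature set is an assumption), an observer could reduce a captured trace of `(timestamp, ciphertext_size)` pairs to a small feature vector like so:

```python
# Toy sketch of Whisper Leak-style feature extraction from a captured
# TLS packet trace. Each observed packet is (timestamp_s, size_bytes).
# The exact features Microsoft used are not reproduced here.

def extract_features(trace):
    """Build a feature vector from packet sizes and inter-arrival times."""
    sizes = [size for _, size in trace]
    times = [t for t, _ in trace]
    gaps = [b - a for a, b in zip(times, times[1:])]  # inter-arrival times
    return {
        "n_packets": len(trace),
        "mean_size": sum(sizes) / len(sizes),
        "total_bytes": sum(sizes),
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# A short fabricated trace: three streamed response chunks.
trace = [(0.00, 87), (0.12, 93), (0.21, 110)]
features = extract_features(trace)
```

Nothing here requires decrypting anything; the sizes and timestamps are visible to any on-path observer.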

Model streaming in large language models (LLMs) is a technique that delivers responses incrementally as the model generates them, instead of waiting for the entire output to be computed. It is an important feedback mechanism, since some responses can take considerable time depending on the complexity of the prompt or task.
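The privacy-relevant difference between the two modes can be shown in a few lines (a minimal illustration using a hypothetical service, not any provider's actual API): streaming emits one network write per chunk, so an observer sees a sequence of separately sized encrypted records rather than one opaque payload.

```python
# Why streaming leaks structure: each yielded token becomes its own
# network write, giving an observer one encrypted record per token.

def stream_response(tokens):
    """Streaming mode: yield tokens one at a time as they are 'generated'."""
    for token in tokens:
        yield token  # each yield maps to a separately sized write

def batch_response(tokens):
    """Non-streaming mode: return the whole reply as a single payload."""
    return "".join(tokens)

tokens = ["The ", "answer ", "is ", "42."]
chunks = list(stream_response(tokens))   # 4 observable writes
full = batch_response(tokens)            # 1 observable write
```

In batch mode the observer learns only the total response size; in streaming mode they also learn how that total is partitioned over time.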

What makes Microsoft's demonstration notable is that the attack works even though communication with the artificial intelligence (AI) chatbot is encrypted with HTTPS, which keeps the content of the exchange confidential and tamper-proof.


Several side-channel attacks against LLMs have been devised in recent years. These include inferring the length of individual plaintext tokens from the size of encrypted packets in streamed model responses, and stealing user inputs (an attack dubbed InputSnatch) by exploiting timing differences introduced by caching in LLM inference.
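The token-length leak is easy to model. In a simplified view (an assumption for illustration, not a real TLS stack), an AEAD cipher adds a fixed per-record overhead, so ciphertext length tracks plaintext length exactly:

```python
# Toy model of why per-token streaming leaks token lengths: with a
# fixed per-record overhead, ciphertext size reveals plaintext size.

RECORD_OVERHEAD = 21  # illustrative constant: header/nonce/tag bytes

def ciphertext_len(plaintext: str) -> int:
    """Size of the encrypted record carrying this chunk (simplified)."""
    return len(plaintext.encode()) + RECORD_OVERHEAD

def inferred_token_len(observed_record_len: int) -> int:
    """What a passive observer recovers from each streamed record."""
    return observed_record_len - RECORD_OVERHEAD

# Each streamed token's length is recoverable from record sizes alone.
for token in ["Hi", "there", "!"]:
    assert inferred_token_len(ciphertext_len(token)) == len(token)
```

Real TLS records have variable framing, but the principle, that length is not hidden by encryption alone, is what these earlier attacks exploited.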

According to Microsoft, Whisper Leak builds on these findings by investigating the possibility that “the sequence of encrypted packet sizes and interarrivals in a streaming language model’s response contains enough information to classify the topic of the initial prompt, even when the response is streamed as a group of tokens.”

To test this hypothesis, the Windows maker said it trained a binary classifier as a proof of concept using three different machine-learning approaches (LightGBM, Bi-LSTM, and BERT) to distinguish prompts on a specific topic from all other traffic (i.e., noise).
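Microsoft's proof of concept used LightGBM, Bi-LSTM, and BERT; as a stand-in, the binary setup can be sketched with a trivial nearest-centroid classifier over two traffic features (mean packet size, mean inter-arrival gap). The feature rows below are fabricated for illustration:

```python
# Toy stand-in for the binary topic classifier. This is NOT the
# LightGBM/Bi-LSTM/BERT pipeline Microsoft used, just the same
# two-class framing on made-up (mean_size_bytes, mean_gap_seconds) rows.

def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(col) / n for col in zip(*rows))

def train(topic_rows, noise_rows):
    return {"topic": centroid(topic_rows), "noise": centroid(noise_rows)}

def classify(model, row):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], row))

topic = [(120.0, 0.08), (115.0, 0.09)]   # traces of on-topic prompts
noise = [(80.0, 0.20), (85.0, 0.18)]     # traces of everything else
model = train(topic, noise)
label = classify(model, (118.0, 0.085))  # resembles the topic traffic
```

The real models learn far richer sequence patterns, which is why they reach the accuracy figures reported below.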

As a result, Microsoft found that many models from Mistral, xAI, DeepSeek, and OpenAI achieved scores above 98%, meaning an attacker monitoring random conversations with a chatbot could reliably flag those on a particular topic.

“Government agencies and internet service providers monitoring traffic to popular AI chatbots could reliably identify users asking questions about specific sensitive topics, such as money laundering, political dissent, or other targets, even if all traffic is encrypted,” Microsoft said.

[Figure: Whisper Leak attack pipeline]

To make matters worse, researchers found that Whisper Leak’s effectiveness increases as attackers collect more training samples over time, potentially turning it into a real threat. Following responsible disclosure, OpenAI, Mistral, Microsoft, and xAI have all introduced mitigations to combat the risks.


“More sophisticated attack models, combined with the richer patterns available in multi-turn conversations and multiple conversations from the same user, mean that cyber attackers with the patience and resources may be able to achieve higher success rates than our initial results suggest,” Microsoft added.

One effective countermeasure, devised by OpenAI, Microsoft, and Mistral, is to add a “random sequence of variable-length text” to each response. This masks the length of each token and renders the side channel ineffective.
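The idea behind the mitigation can be sketched as follows (the exact field names and padding scheme each provider uses are assumptions here): wrapping every streamed chunk with a random-length padding field decouples the observed record size from the underlying token length.

```python
# Sketch of the padding mitigation. Field names ("text", "p") and the
# padding alphabet are illustrative assumptions, not any provider's
# actual wire format.

import random

def pad_chunk(token: str, max_pad: int = 32) -> dict:
    """Wrap a token with random-length padding before encryption."""
    pad_len = random.randint(0, max_pad)
    pad = "".join(random.choice("abcdefgh") for _ in range(pad_len))
    return {"text": token, "p": pad}  # the receiver simply discards "p"

chunk = pad_chunk("hello")

# The same token now produces records of varying size, so record size
# no longer reveals token length to a passive observer.
sizes = {len(str(pad_chunk("hi"))) for _ in range(50)}
```

Padding trades a little bandwidth for removing the size signal; timing can be blurred similarly by batching tokens into groups before sending.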

Microsoft also recommends that privacy-conscious users avoid discussing sensitive topics with AI providers over untrusted networks, use a VPN as an additional layer of protection, opt for non-streaming modes of LLMs, and switch to providers that have implemented mitigations.

The disclosure coincides with a new evaluation of eight open-weight LLMs from Alibaba (Qwen3-32B), DeepSeek (v3.1), Google (Gemma 3-1B-IT), Meta (Llama 3.3-70B-Instruct), Microsoft (Phi-4), Mistral (Large-2, aka Large-Instruct-2047), OpenAI (GPT-OSS-20b), and Zhipu AI (GLM 4.5-Air), which found them to be highly susceptible to adversarial manipulation, particularly in multi-turn attacks.

[Figure: Comparative analysis of attack success rates across the tested models for single-turn and multi-turn scenarios]

“These results highlight the systematic inability of current open-weight models to maintain safety guardrails over long interactions,” Cisco AI Defense researchers Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, and Adam Swanda said in an accompanying paper.

“We assess that tuning strategies and lab priorities significantly influence resilience. Capability-focused models such as Llama 3.3 and Qwen 3 exhibit higher multi-turn susceptibility, while safety-focused designs such as Google Gemma 3 show more balanced performance.”


These findings demonstrate that organizations adopting open-source models can face operational risks unless additional security guardrails are in place. Since the public debut of OpenAI's ChatGPT in November 2022, a growing body of research has uncovered fundamental security weaknesses in LLMs and AI chatbots.

This makes it important for developers to implement appropriate security controls when integrating such capabilities into their workflows, fine-tune open-weight models to be more robust against jailbreaks and other attacks, conduct regular AI red-team evaluations, and enforce strict system prompts tailored to defined use cases.
