Widely Used AI Chatbots: Worrying Encryption Weakness Exposes User Messages


(Image credit: Andriy Onufriyenko/Getty Images)


Microsoft’s cybersecurity researchers have identified a significant weakness in the way current artificial intelligence (AI) chatbots are deployed, one that could let attackers who intercept the traffic infer what users are discussing – circumventing the encryption intended to keep those conversations private.

Dubbed Whisper Leak, the technique is a “man-in-the-middle attack”: an eavesdropper positioned between a user and the chatbot’s servers captures messages in transit and, by analyzing their metadata, deduces what they are about.
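To make the mechanics concrete, here is a minimal sketch of the kind of passive observation such an attack begins with: logging only the size and timing of encrypted packets. It assumes the Python scapy library, raw-socket privileges, and a placeholder address for the chatbot service; Microsoft’s write-up does not publish tooling, so every name here is illustrative.

```python
# A minimal sketch of the metadata an on-path observer can log, assuming
# the scapy library and root privileges. CHATBOT_HOST is a placeholder
# (documentation-range) address, not a real chatbot endpoint.
from scapy.all import sniff, IP, TCP

CHATBOT_HOST = "203.0.113.10"  # hypothetical chatbot server address

def log_metadata(pkt):
    """Record size and timing only -- the encrypted payload is never read."""
    if IP in pkt and TCP in pkt:
        print(f"t={float(pkt.time):.6f}  size={len(pkt)} bytes  "
              f"{pkt[IP].src} -> {pkt[IP].dst}")

# Capture 100 packets to or from the chatbot host; no decryption occurs.
sniff(filter=f"tcp and host {CHATBOT_HOST}", prn=log_metadata, count=100)
```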


“I am not surprised,” cybersecurity expert Dave Lear told Live Science. “LLMs represent a potential bonanza, considering the quantity of information inputted by individuals – not to mention the volume of medical information potentially contained within, given hospitals are now using them to filter test data. It was only a matter of time before someone devised a technique for extracting that data.”

Uncovering weaknesses within AI chatbots

Generative AI systems, such as ChatGPT, are powerful AI tools that produce responses from a series of prompts, and they power the virtual assistants found on smartphones. Large language models (LLMs), a subset of generative AI, are trained on immense data sets to produce responses in text form.

Exchanges between users and LLMs are typically protected by transport layer security (TLS), an encryption protocol designed to stop eavesdroppers from reading communications. The research team, however, succeeded in intercepting traffic between a user and a chatbot and deducing its content by examining the metadata of the communications.

Metadata is information about the data itself – such as the size and timing of transmissions – and it often proves more revealing than the messages it describes. Even though the content of the communications between users and LLMs remained encrypted, by intercepting the messages and examining their metadata the researchers could determine what topic the messages concerned.

They did this by analyzing the sizes of the encrypted data packets – the standardized units of data transmitted across a network – carrying the LLM’s responses. From the timing of packets and the sequence of token lengths they revealed, the researchers devised a series of attack techniques that could reconstruct plausible sentences from the messages, all without breaking the encryption.
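As a rough illustration of that inference step – not the researchers’ actual models, which the blog post describes at a higher level – the sketch below assumes an attacker already holds packet-size traces labeled by topic and trains a simple classifier on them. The feature encoding, toy traces, and choice of scikit-learn model are all assumptions made for the example.

```python
# A minimal sketch of topic inference from encrypted packet sizes, assuming
# pre-collected traces labeled 1 (sensitive topic) or 0 (other). The toy
# data and the logistic-regression model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

N = 50  # fixed number of leading packet sizes used as features

def to_features(sizes):
    """Pad or truncate a packet-size sequence to a fixed-length vector."""
    return list(sizes[:N]) + [0] * max(0, N - len(sizes))

# Hypothetical training traces: (packet-size sequence, label).
traces = [
    ([120, 840, 310, 95] * 20, 1),
    ([60, 75, 62, 58] * 20, 0),
    ([130, 790, 305, 99] * 20, 1),
    ([64, 70, 66, 61] * 20, 0),
]
X = np.array([to_features(s) for s, _ in traces])
y = np.array([label for _, label in traces])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Classify a freshly observed trace without ever decrypting it.
observed = to_features([125, 820, 300, 97] * 20)
print(model.predict_proba([observed])[0])
```

The point of the toy example is that only sizes go into the model: nothing about it requires the plaintext, which is what makes the leak survive encryption.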

In many respects, the Whisper Leak attack is a more sophisticated version of the internet-monitoring approach taken by the U.K. Investigatory Powers Act 2016, which infers what a message concerns from its source, timing, size, and frequency, without necessarily accessing the message content itself.

“To contextualize this: should a government organization or internet provider monitor traffic directed towards a popular AI chatbot, they could reliably pinpoint users posing questions regarding specified sensitive subjects — whether related to money laundering, political opposition, or additional monitored areas — despite all traffic being encrypted,” wrote security researchers Jonathan Bar Or and Geoff McDonald in a blog post released by the Microsoft Defender Security Research Team.


LLM vendors can implement several mitigations. One is random padding: adding random bytes to messages so that packet sizes no longer track the underlying content, which lengthens messages unpredictably and interferes with inference.
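What that padding might look like in practice can be sketched as follows, assuming a vendor wraps each streamed response chunk in a padded envelope before TLS encryption. The field names and padding bound are hypothetical, not taken from any specific vendor’s fix.

```python
# A minimal sketch of random padding, assuming each streamed response chunk
# is wrapped before encryption. Field names and MAX_PAD are hypothetical.
import json
import os

MAX_PAD = 256  # hypothetical upper bound on padding length, in bytes

def pad_chunk(text: str) -> bytes:
    """Wrap a chunk with random filler so ciphertext length no longer
    tracks token length."""
    pad_len = int.from_bytes(os.urandom(2), "big") % (MAX_PAD + 1)
    envelope = {
        "text": text,
        "pad": os.urandom(pad_len).hex(),  # client discards this field
    }
    return json.dumps(envelope).encode("utf-8")

# Chunks of very different lengths now yield overlapping on-wire sizes,
# frustrating size-based inference.
print(len(pad_chunk("Hi")), len(pad_chunk("a much longer streamed token run")))
```

Because the padding length is drawn independently for each chunk, an observer sees sizes dominated by noise rather than by token boundaries.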

The heart of the Whisper Leak weakness is an architectural consequence of how LLMs are deployed. Although the vulnerability can be addressed, the researchers said that fixes have not yet been implemented universally across LLM vendors.

Until vendors resolve the flaw, the researchers recommend that users avoid discussing sensitive subjects over untrusted networks and check whether their vendor has applied mitigations. Virtual private networks (VPNs) offer an additional safeguard by concealing the user’s identity and location.

Peter Ray Allison

Peter is a degree-qualified engineer and an experienced freelance journalist, specializing in science, technology and culture. He contributes to a number of publications, including the BBC, Computer Weekly, IT Pro, the Guardian and the Independent. He has worked as a technology journalist for more than a decade. Peter holds a degree in computer-aided engineering from Sheffield Hallam University and has worked in the engineering and architecture sectors with companies including Rolls-Royce and Arup.
