Ever since ChatGPT made artificial intelligence (AI) mainstream in late 2022, there has been a flood of AI assistants in the market. Companies, both in the tech and non-tech sectors, have been vying for our attention with flashy applications and upgrades. These AI assistants have become our go-to consultants for both business and personal matters, serving as advisors, therapists, and even confidants. However, despite assurances from providers that our information is protected, recent research suggests that our secrets may not be as secure as we think.
A study conducted by researchers at the University of Ber-Gurion in March revealed that AI assistant responses can be deciphered with surprising accuracy, even when encrypted. The researchers exploited a vulnerability in the system design of major platforms like Microsoft’s Copilot and OpenAI’s ChatGPT-4. Only Google’s Gemini remained unaffected by this attack. What’s more concerning is that once a tool is built to decipher a conversation with one AI assistant, it can be easily shared and used on other services without any additional effort.
This isn’t the first time security flaws in AI assistants have been brought to light. In late 2023, researchers from various US universities and Google DeepMind demonstrated how ChatGPT could be prompted to repeat specific words, leading it to disclose sensitive information from its training data. This included paragraphs from books and poems, URLs, unique user identifiers, Bitcoin addresses, and programming codes.
The security risks are even more pronounced with open-source models. A recent study showcased how an attacker could compromise Hugging Face’s conversion service and gain unauthorized access to submitted models. This could result in the implantation of malicious models or unauthorized access to private repositories datasets. Even major organizations like Microsoft and Google, which have numerous models hosted on Hugging Face, could be at risk.
As AI assistants gain more power and access to our personal and professional devices, the risk of attacks increases. Bill Gates, in a blog post, described an overarching AI assistant that integrates and analyzes information from all our devices to act as our “personal assistant.” While this may sound exciting, if security issues are not addressed promptly, our entire lives could be hijacked, along with the information of anyone connected to us.
So, how can we protect ourselves? The US House of Representatives recently banned the use of Microsoft’s Copilot due to concerns about the leakage of House data to unauthorized cloud services. Additionally, the Cyber Safety Review Board published a report blaming Microsoft for security failures that allowed Chinese threat actors to access US government officials’ emails in 2023. It’s clear that more needs to be done to address these security issues, and regulators and policymakers should demand action from technology companies.
In the meantime, it’s advisable to refrain from sharing sensitive personal or business information with AI assistants. Perhaps, if we collectively stop using these bots until adequate security measures are implemented, we can make our voices heard and encourage companies and developers to prioritize our protection.
Dr. Merav Ozair, a guest author for Cointelegraph, emphasizes the need for substantial action to safeguard users’ information. She suggests that pledging responsible AI practices is not enough and that regulators and policymakers must demand tangible steps from technology companies. Dr. Ozair is a renowned expert in emerging technologies and holds a PhD from NYU’s Stern Business School.
It’s important to note that this article serves as general information and should not be considered legal or investment advice. The views expressed here are solely those of the author and do not necessarily reflect the opinions of Cointelegraph.