
Plug And Protect: Mitigating Account Takeover Risks With ChatGPT Plugins

Third-Party ChatGPT Plugins

Published on April 30th, 2024

Cybersecurity researchers have discovered concerning vulnerabilities in third-party plugins built for OpenAI’s ChatGPT, which could give malicious actors a new attack vector for gaining unauthorized access to sensitive data.

As detailed in a recent report by Salt Labs, security weaknesses have been identified both within the ChatGPT system itself and across its broader ecosystem. These vulnerabilities could enable attackers to install harmful plugins without user consent, potentially compromising accounts on external platforms such as GitHub.

ChatGPT plugins, as their name suggests, are tools created to enhance the capabilities of the large language model (LLM). They facilitate tasks such as accessing real-time information, executing computations, or integrating with external services.
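For illustration, the sketch below shows roughly what a plugin definition looked like, rendered as a Python dictionary. Real plugins were described by a JSON manifest (ai-plugin.json) pointing at an OpenAPI specification; every name and URL here is a placeholder rather than a real plugin.

```python
# Illustrative plugin definition, expressed as a Python dict for readability.
# The real artifact was a JSON manifest (ai-plugin.json); all values below
# are placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Repo Helper",
    "name_for_model": "repo_helper",
    "description_for_model": "Look up issues and pull requests for the user.",
    "auth": {"type": "oauth"},  # plugins could ask the user to sign in to a service
    "api": {
        "type": "openapi",
        "url": "https://plugin.example.com/openapi.yaml",  # placeholder spec URL
    },
}
```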

Separately, OpenAI has been phasing plugins out in favor of GPTs, custom versions of ChatGPT tailored to specific applications that reduce reliance on third-party services. As of March 19, 2024, ChatGPT users can no longer install new plugins or start new conversations with existing plugins.

One specific vulnerability identified by Salt Labs exploits the OAuth workflow to trick a user into unwittingly installing a malicious plugin, taking advantage of the fact that ChatGPT does not validate that the user actually initiated the plugin installation.

This flaw could potentially allow malicious actors to intercept and extract all data shared by the affected user, including sensitive proprietary information.
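To make the missing check concrete, here is a minimal sketch of the kind of state-binding validation that prevents a forged installation callback. It is not OpenAI’s actual implementation; the function names, in-memory store, and URLs are assumptions made for illustration.

```python
import secrets

# Hypothetical store of pending plugin installations, keyed by OAuth "state".
pending_installs = {}

def start_plugin_install(user_id: str, plugin_id: str) -> str:
    """Begin an OAuth-based install and remember which user asked for it."""
    state = secrets.token_urlsafe(32)
    pending_installs[state] = {"user_id": user_id, "plugin_id": plugin_id}
    return (
        "https://plugin-vendor.example/oauth/authorize"  # placeholder vendor URL
        f"?client_id=chatgpt&state={state}"
    )

def finish_plugin_install(user_id: str, state: str, code: str) -> bool:
    """Accept the OAuth callback only if this user actually started the flow."""
    pending = pending_installs.pop(state, None)
    if pending is None or pending["user_id"] != user_id:
        # A forged or replayed approval link is rejected instead of being
        # silently turned into an installed plugin.
        return False
    # ...exchange `code` for a token and attach the plugin to user_id...
    return True
```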

Researchers at Salt Security discovered security weaknesses in PluginLab, a framework used to create ChatGPT plugins. These vulnerabilities could be exploited by attackers to hijack an organization’s accounts on platforms like GitHub. This would grant them unauthorized access to sensitive information such as source code repositories.

Here’s how the attack could work: the victim’s unique identifier (“memberId”) can first be retrieved from one PluginLab endpoint (“auth.pluginlab[.]ai/members/requestMagicEmailCode”). The attacker can then use another endpoint (“auth.pluginlab[.]ai/oauth/authorized”) to impersonate the victim and obtain a code that grants access to the victim’s account on services like GitHub through ChatGPT. It’s important to note that there is no evidence this specific flaw has been used to compromise user data.
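Conceptually, the weakness comes down to an authorization endpoint that trusts a client-supplied identifier. The sketch below contrasts that behavior with a version that binds the issued code to the caller who actually authenticated; it is not PluginLab’s real code, and the function names, fields, and token format are illustrative assumptions.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Session:
    member_id: str  # identity established when the caller logged in

def create_oauth_code(subject: str) -> str:
    """Stand-in for issuing an OAuth authorization code bound to `subject`."""
    return f"{subject}:{secrets.token_urlsafe(16)}"

def issue_code_vulnerable(params: dict) -> str:
    # Trusts whatever identifier the request carries: an attacker who has
    # learned the victim's memberId receives a code that represents the victim.
    return create_oauth_code(subject=params["memberId"])

def issue_code_fixed(session: Session, params: dict) -> str:
    # Ignores attacker-controlled input and binds the code to the identity
    # that was actually authenticated, so it can only represent the caller.
    return create_oauth_code(subject=session.member_id)
```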

OAuth Bug in Multiple Plugins Poses Credential Theft Risk:

Security researchers also identified a vulnerability in how several plugins, including Kesem AI, handle the OAuth authorization flow. Attackers could leverage this bug to steal the account credentials associated with the plugin itself: the attack only requires sending the victim a malicious link, and clicking it could hand those credentials to the attacker and compromise the victim’s account.
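The usual defense against this class of bug is an allow-list check on the OAuth redirect target, so that authorization results can only be delivered back to the legitimate callback. The sketch below illustrates that check; the hostnames and callback path are placeholders, not any plugin’s actual configuration.

```python
from urllib.parse import urlparse

# Placeholder allow-list: the only host the plugin should ever redirect
# authorization results to.
ALLOWED_REDIRECT_HOSTS = {"chat.openai.com"}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Reject authorization requests whose callback points somewhere else."""
    parsed = urlparse(redirect_uri)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_REDIRECT_HOSTS

# A crafted link pointing the callback at an attacker's server fails the check:
assert not is_safe_redirect("https://attacker.example/collect")
assert is_safe_redirect("https://chat.openai.com/aip/example-plugin/oauth/callback")
```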

Recent ChatGPT Vulnerabilities Highlight Potential Risks:

These plugin vulnerabilities follow Imperva researchers’ identification of two cross-site scripting (XSS) flaws in ChatGPT itself that, if chained, could allow attackers to take over any user account. Additionally, in December 2023, security researcher Johann Rehberger demonstrated how custom GPTs could be built to phish for user credentials and transmit the stolen data to an external server.

AI Assistant Responses Vulnerable to Remote Keylogging Attack

Academics from Ben-Gurion University’s Offensive AI Research Lab described how LLMs deliver their output, stating, “LLMs generate and send responses as a series of tokens (similar to words), with each token transmitted from the server to the user as it is generated.”
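The toy snippet below mimics that streaming behavior: each token is emitted as its own server-sent-events-style record, so an on-path observer sees one message per token. It is a simplified illustration, not any vendor’s code.

```python
import time
from typing import Iterator

def stream_response(tokens: list[str]) -> Iterator[bytes]:
    """Emit one SSE-style record per generated token."""
    for tok in tokens:
        yield f"data: {tok}\n\n".encode()
        time.sleep(0.01)  # tokens leave the server as they are generated

response_tokens = ["The", " patient", " should", " take", " the", " medication"]
for record in stream_response(response_tokens):
    print(len(record), record)  # record sizes closely track token lengths
```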

Although this traffic is encrypted, the researchers identified a previously unrecognized vulnerability they term the “token-length side-channel.” Because tokens are transmitted sequentially, a malicious actor positioned on the network can infer sensitive, confidential details exchanged in private AI assistant conversations from the sizes of the transmitted packets.
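In simplified form, the side channel works like this: if each token travels in its own encrypted record whose size is the token length plus a roughly fixed overhead, an eavesdropper can recover the sequence of token lengths without decrypting anything. The overhead constant and record sizes below are assumptions for illustration.

```python
# Assumed fixed framing/encryption overhead per record, in bytes.
OVERHEAD = 7

def token_lengths(observed_record_sizes: list[int]) -> list[int]:
    """Recover token lengths from the sizes of captured records."""
    return [size - OVERHEAD for size in observed_record_sizes]

# Hypothetical record sizes captured from the wire:
print(token_lengths([10, 15, 13, 11]))  # -> [3, 8, 6, 4]
```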

The vulnerability is exploited through a token inference attack, which aims to decrypt responses inside encrypted traffic by training an LLM to translate sequences of token lengths back into their natural-language counterparts (i.e., plaintext).

In practice, the attack involves intercepting chat responses from an LLM provider in real time, using the packet sizes recorded in network headers to infer the length of each token, extracting the resulting token-length sequences, and feeding them to the custom LLM to reconstruct the response.
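Put together, the pipeline can be sketched as below, with every attacker-side component stubbed out; the capture step, the overhead constant, and the inference model are hypothetical placeholders standing in for the components the researchers describe.

```python
OVERHEAD = 7  # assumed per-record overhead, as in the earlier sketch

def capture_stream_records() -> list[int]:
    """Stand-in for sniffing the encrypted per-token records off the wire."""
    return [10, 15, 13, 11]  # hypothetical observed record sizes

def to_token_lengths(record_sizes: list[int]) -> list[int]:
    return [size - OVERHEAD for size in record_sizes]

def infer_plaintext(length_sequence: list[int]) -> str:
    """Stand-in for the custom LLM trained to map length sequences to text."""
    return "<model-reconstructed response>"

lengths = to_token_lengths(capture_stream_records())
print(infer_plaintext(lengths))
```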

For the successful execution of such an attack, two prerequisites are outlined: the presence of an AI chat client operating in streaming mode and the ability of an adversary to intercept network traffic between the client and the AI chatbot.

To mitigate the potential risks posed by this side-channel attack, the researchers suggest several countermeasures. These include implementing random padding to obfuscate actual token lengths, transmitting tokens in larger batches rather than individually, and sending complete responses at once rather than token-by-token.
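As a rough illustration of the padding countermeasure, the snippet below pads each token to a random total length before transmission so that record sizes no longer track token lengths; the padding scheme and parameters are illustrative only, not a production design.

```python
import secrets

def pad_token(token: str, max_pad: int = 16) -> bytes:
    """Append a random amount of filler so record size hides token length."""
    pad_len = secrets.randbelow(max_pad + 1)
    # A real scheme would need unambiguous framing to strip the padding;
    # NUL-byte filler here simply illustrates the size obfuscation.
    return token.encode() + b"\x00" * pad_len

for tok in ["The", " patient", " should"]:
    print(len(pad_token(tok)))  # sizes no longer reveal token lengths
```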

The researchers concluded by emphasizing the delicate balance that must be struck between security, usability, and performance when developing AI assistants.