SAN FRANCISCO, United States — A recently discovered vulnerability in OpenAI’s ChatGPT “Connectors” feature has raised serious concerns about data security, after researchers revealed that a single malicious document could silently siphon sensitive information from a connected Google Drive — without any user interaction.
The flaw was uncovered by security researchers at HiddenLayer, who demonstrated how attackers could exploit Connectors — integrations that allow ChatGPT to interact with external platforms like Google Drive, Microsoft OneDrive, or Slack — to access and exfiltrate private files.
The exploit involved uploading a carefully crafted file to a user’s connected Google Drive. Once ChatGPT scanned or parsed that file through its Connector, the file’s contents triggered a response that exposed other data stored in the drive, with no clicks, prompts, or active input required from the user.
“What makes this so dangerous is how effortless the attack is,” said Tom Bonner, one of the researchers who led the discovery. “You just need the user to have the Connector active. They don’t even have to open the file.”
OpenAI confirmed it patched the vulnerability within 24 hours of being notified and said it has since updated its filters and Connector behavior to prevent similar exploits. A spokesperson for the company emphasized that no real-world exploitation had been detected and that the issue was identified during controlled testing.
Still, experts say the discovery highlights the delicate balance between powerful automation and robust security. As AI tools become more deeply embedded into personal and corporate systems, each new convenience — like giving ChatGPT access to cloud storage — expands the potential attack surface.
“This is the tradeoff,” Bonner added. “If you connect your AI assistant to everything, you need to be absolutely sure it’s not being tricked into turning those connections against you.”
The incident serves as a warning for companies and individuals using AI integrations in sensitive environments. Security researchers continue to call for greater transparency from AI developers, particularly around features that touch external data sources.