Researchers at Tenable disclosed a cluster of ChatGPT security bugs that let attackers inject arbitrary prompts, exfiltrate user data, bypass safety controls and create persistent backdoors. The flaws — seven distinct vectors in total — exploit how the chatbot and its browsing helper handle external content, search results and stored conversation history.
The work is a reminder that foundations built on large language models (LLMs) introduce new classes of attack surfaces. Unlike traditional software bugs, these vulnerabilities turn content and context into weapons.
How the ChatGPT security bugs work
Tenable’s researchers showed multiple ways to manipulate ChatGPT without the user’s informed consent. Several attacks rely on the system following instructions scraped from web pages, blog comments or poisoned search results. If the model later summarizes that page or opens an indexed link, it will obediently execute the embedded instructions — effectively turning innocuous content into a command-and-control channel.
The defects include one-click and zero-click prompt injection methods, search-result poisoning that abuses bing.com tracking wrappers, and conversation injection that leverages ChatGPT’s habit of including previous replies in its reasoning. In plain terms: malicious content on the web can hide prompts that the model then treats as part of the user’s request.
From prompt injection to data theft
The most dangerous combinations chain together multiple weaknesses. For example, an attacker can plant a malicious comment on a trusted site. If a user later asks ChatGPT to summarize that site, the model’s browsing subroutine may read the malicious instructions and then store them in the conversation history. Once embedded, those instructions can be reactivated later — a persistence mechanism that allows staged exfiltration of chat history or stored memories.
Tenable also demonstrated URL-based prompt injection. OpenAI’s feature that turns a query parameter into a ChatGPT prompt (for example, a chat link with ?q=) can be weaponized: craft a link that looks helpful, the user clicks, and the URL instantly injects a hostile instruction. No complex tools required — just social engineering and a poisoned link.
Why zero-click and one-click attacks matter
Zero-click attacks are especially worrying because they require almost no user action. Tenable’s zero-click proof-of-concept shows how a benign question can trigger the model to visit search results that include a poisoned page — and then execute the hidden instruction. One-click attacks lower the bar even further, turning a single, innocuous link into a direct exploit.
For non-technical users or enterprise employees, that combination is scary: a routine query or a single link can be enough to leak private notes, leaked API keys, or personal data stored in the conversation memory.
Broader implications for enterprises and LLM safety
This research is the latest in a string of studies that reveal fundamental security challenges for LLM-based tools. Tenable warns that high-resource attackers — APTs or organized campaigns — could orchestrate multi-stage attacks that scale across many users. Even low-effort campaigns, such as seeding blog comments, could influence preferences stored in memories or direct users to phishing pages.
Tenable tested many exploits against ChatGPT-4o and found several techniques also work against newer ChatGPT-5 instances. The company disclosed the flaws to OpenAI in April; OpenAI acknowledged the reports, but Tenable says some issues remain difficult to reproduce and others still persist.
What organizations should do now
The quick takeaway: treat LLM integrations as new security boundaries. Practical steps include:
- Limit web browsing and external content ingestion for high-risk accounts.
- Disable auto-following of links or auto-execution of content-derived instructions.
- Audit stored conversation history and memories for unexpected entries.
- Train users to avoid clicking unknown ChatGPT query links and to treat AI outputs as untrusted sources.
- Monitor for unusual patterns that could indicate exfiltration (large or repeated data dumps).
These defenses won’t eliminate the root causes, but they reduce the attack surface while vendors patch core issues.
The research remains a warning, not a full exploit kit
Tenable’s report stresses how medium and high bugs can be chained to reach critical severity. Individually, the flaws are concerning; together, they form full attack paths from injection and evasion to data exfiltration and persistence. The upshot for security teams is clear: assume adversaries will mix techniques and plan defenses accordingly.
Read also
Join the discussion in our Facebook community.