The Gemini Trifecta represents a perilous change in AI security, as assailants have the potential to transform Gemini into an attack vehicle rather than merely a target.

Singapore, October 2, 2025 — Tenable, an exposure management company, has identified the Gemini Trifecta, a collective term for three vulnerabilities in Google’s Gemini suite. These vulnerabilities, now fixed, posed significant privacy risks to users. They could have allowed attackers to silently take sensitive data, including location information and saved user memories, by manipulating Gemini’s behaviour.
The Gemini Trifecta was involved in three fundamental components of the Gemini suite, each of which exposed users in distinct yet equally perilous ways. In Gemini Cloud Assist, it is possible to introduce poisoned log entries into the system, causing the system to unknowingly execute malicious instructions when users interface with Gemini in the future. Attackers could silently inject queries into a victim’s browser history in the Gemini Search Personalisation Model. Gemini subsequently regarded this data as trusted context, enabling the syphoning of sensitive data, such as saved information and location. In the Gemini Browsing Tool, attackers could deceive Gemini into sending concealed outbound requests that contained private user data, thereby delivering it directly to a server controlled by the attacker.
The combination of these three vulnerabilities resulted in the creation of invisible doors for Gemini, which enabled attackers to manipulate its behaviour and capture valuable data without the user’s knowledge. In other words, the Gemini Trifecta demonstrated that attackers did not require direct access, malware, or even fraudulent emails to achieve success. Rather, Gemini itself served as an attack vehicle, which increased the risks for all users and organisations that rely on AI-driven tools.
According to Tenable Research, the primary issue was that Gemini’s integrations failed to distinguish between secure user input and content supplied by attackers. This resulted in Gemini treating poisoned logs, injected search history entries, or disguised web content as trusted context, effectively transforming routine features into concealed attack channels.
Liv Matan, Senior Security Researcher at Tenable, stated, “Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs. The Gemini Trifecta shows how AI platforms can be manipulated in ways users never see, making data theft invisible and redefining the security challenges enterprises must prepare for. Like any powerful technology, large language models (LLMs) such as Gemini bring enormous value, but they remain susceptible to vulnerabilities. Security professionals must move decisively, locking down weaknesses before attackers can exploit them and building AI environments that are resilient by design, not by reaction. This isn’t just about patching flaws; it’s about redefining security for an AI-driven era where the platform itself can become the attack vehicle.”
Potential Impact of Exploiting the Gemini Trifecta
If exploited before remediation, the Gemini Trifecta could have allowed attackers to:
- Silently insert malicious instructions into logs or search history.
- Exfiltrate sensitive user information such as saved data and location history.
- Abuse cloud integrations to pivot into wider cloud resources.
- Trick Gemini into sending users data to attacker-controlled servers through its browsing tool.
Google has remediated all three vulnerabilities, and no additional action is required from users.
Recommendations for Security Teams
While no user action is required, Tenable advises security professionals to:
- Treat AI-driven features as active attack surfaces, not passive tools.
- Audit logs, search histories, and integrations regularly to detect poisoning or manipulation attempts.
- Monitor for unusual tool executions or outbound requests that could indicate exfiltration.
- Test AI-enabled services for resilience against prompt injection and strengthen defenses proactively.
“This vulnerability disclosure underscores that securing AI isn’t just about fixing individual flaws,” Matan emphasised. “It’s about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defenses that prevent small cracks from becoming systemic exposures.”
Read the full research findings here.
You may also like
-
Mimecast Expands APAC Investment to Accelerate Human Risk Management Growth
-
The Future of AI: Building Data Foundations for Success in Asia
-
Exabeam Nova: Boosting Cybersecurity Efficiency by 80%
-
An ai.RETAIL Platform as Part of Expanded Strategic Partnership
-
Navigating The Murky Waters of Data Abuse
